I0131 12:56:12.136964 9 e2e.go:243] Starting e2e run "f500b07c-89f9-4588-b07d-6f1b18ca7724" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580475370 - Will randomize all specs
Will run 215 of 4412 specs

Jan 31 12:56:12.541: INFO: >>> kubeConfig: /root/.kube/config
Jan 31 12:56:12.553: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 31 12:56:12.634: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 31 12:56:12.668: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 31 12:56:12.668: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 31 12:56:12.668: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 31 12:56:12.674: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 31 12:56:12.674: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 31 12:56:12.674: INFO: e2e test version: v1.15.7
Jan 31 12:56:12.676: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:56:12.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 31 12:56:12.765: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 12:56:12.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1290'
Jan 31 12:56:14.403: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 12:56:14.403: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 31 12:56:14.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1290'
Jan 31 12:56:14.718: INFO: stderr: ""
Jan 31 12:56:14.718: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 12:56:14.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1290" for this suite.
Jan 31 12:56:22.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:56:22.975: INFO: namespace kubectl-1290 deletion completed in 8.208710204s

• [SLOW TEST:10.299 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:56:22.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 12:56:23.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e" in namespace "downward-api-8965" to be "success or failure"
Jan 31 12:56:23.125: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.765839ms
Jan 31 12:56:25.141: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032398403s
Jan 31 12:56:27.152: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043662398s
Jan 31 12:56:29.163: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054231767s
Jan 31 12:56:31.175: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066118787s
Jan 31 12:56:33.186: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076804963s
Jan 31 12:56:35.201: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.091848045s
STEP: Saw pod success
Jan 31 12:56:35.201: INFO: Pod "downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e" satisfied condition "success or failure"
Jan 31 12:56:35.207: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e container client-container:
STEP: delete the pod
Jan 31 12:56:35.599: INFO: Waiting for pod downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e to disappear
Jan 31 12:56:35.678: INFO: Pod downwardapi-volume-a2c1e2b0-7619-422d-9df9-40970207724e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 12:56:35.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8965" for this suite.
Jan 31 12:56:41.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:56:42.076: INFO: namespace downward-api-8965 deletion completed in 6.388479284s

• [SLOW TEST:19.100 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:56:42.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-342ae9b4-6c15-4bc1-884d-a565cc1c4479
STEP: Creating a pod to test consume secrets
Jan 31 12:56:42.217: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2" in namespace "projected-6093" to be "success or failure"
Jan 31 12:56:42.259: INFO: Pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2": Phase="Pending", Reason="", readiness=false. Elapsed: 41.39867ms
Jan 31 12:56:44.273: INFO: Pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054949546s
Jan 31 12:56:46.290: INFO: Pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072686915s
Jan 31 12:56:48.299: INFO: Pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081277787s
Jan 31 12:56:50.314: INFO: Pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096846914s
STEP: Saw pod success
Jan 31 12:56:50.315: INFO: Pod "pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2" satisfied condition "success or failure"
Jan 31 12:56:50.321: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2 container projected-secret-volume-test:
STEP: delete the pod
Jan 31 12:56:50.478: INFO: Waiting for pod pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2 to disappear
Jan 31 12:56:50.485: INFO: Pod pod-projected-secrets-3600cfef-bca1-489e-a290-4185280d84a2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 12:56:50.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6093" for this suite.
Jan 31 12:56:56.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:56:56.625: INFO: namespace projected-6093 deletion completed in 6.134130221s

• [SLOW TEST:14.549 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:56:56.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-4089dffc-6c82-4c41-ba41-130afc6a4ad6
STEP: Creating secret with name secret-projected-all-test-volume-ee6f2486-27c5-4762-8bff-8fb15ba3d5da
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 31 12:56:56.783: INFO: Waiting up to 5m0s for pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc" in namespace "projected-9785" to be "success or failure"
Jan 31 12:56:56.801: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.108529ms
Jan 31 12:56:58.810: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02703572s
Jan 31 12:57:00.820: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037307473s
Jan 31 12:57:02.839: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056020793s
Jan 31 12:57:04.859: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076458616s
Jan 31 12:57:06.881: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098698794s
Jan 31 12:57:08.896: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.113244505s
STEP: Saw pod success
Jan 31 12:57:08.896: INFO: Pod "projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc" satisfied condition "success or failure"
Jan 31 12:57:08.904: INFO: Trying to get logs from node iruya-node pod projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc container projected-all-volume-test:
STEP: delete the pod
Jan 31 12:57:09.003: INFO: Waiting for pod projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc to disappear
Jan 31 12:57:09.012: INFO: Pod projected-volume-81d408d9-aa68-4cb7-b0b1-2a48480f52dc no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 12:57:09.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9785" for this suite.
Jan 31 12:57:15.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:57:15.189: INFO: namespace projected-9785 deletion completed in 6.165018144s

• [SLOW TEST:18.563 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:57:15.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 31 12:57:15.352: INFO: Waiting up to 5m0s for pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f" in namespace "downward-api-9861" to be "success or failure"
Jan 31 12:57:15.370: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.359254ms
Jan 31 12:57:17.379: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02718604s
Jan 31 12:57:19.389: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037364414s
Jan 31 12:57:21.399: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046854768s
Jan 31 12:57:24.132: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780028647s
Jan 31 12:57:26.144: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.792034412s
Jan 31 12:57:28.158: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.805781682s
STEP: Saw pod success
Jan 31 12:57:28.158: INFO: Pod "downward-api-c1513487-d72c-46e0-828d-edf1467daf0f" satisfied condition "success or failure"
Jan 31 12:57:28.161: INFO: Trying to get logs from node iruya-node pod downward-api-c1513487-d72c-46e0-828d-edf1467daf0f container dapi-container:
STEP: delete the pod
Jan 31 12:57:28.382: INFO: Waiting for pod downward-api-c1513487-d72c-46e0-828d-edf1467daf0f to disappear
Jan 31 12:57:28.397: INFO: Pod downward-api-c1513487-d72c-46e0-828d-edf1467daf0f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 12:57:28.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9861" for this suite.
Jan 31 12:57:34.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:57:34.664: INFO: namespace downward-api-9861 deletion completed in 6.189694923s

• [SLOW TEST:19.474 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:57:34.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-67913d39-6199-4f94-aa53-42bff128173e in namespace container-probe-6091
Jan 31 12:57:42.761: INFO: Started pod busybox-67913d39-6199-4f94-aa53-42bff128173e in namespace container-probe-6091
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 12:57:42.769: INFO: Initial restart count of pod busybox-67913d39-6199-4f94-aa53-42bff128173e is 0
Jan 31 12:58:37.511: INFO: Restart count of pod container-probe-6091/busybox-67913d39-6199-4f94-aa53-42bff128173e is now 1 (54.742023109s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 12:58:37.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6091" for this suite.
Jan 31 12:58:43.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 12:58:43.929: INFO: namespace container-probe-6091 deletion completed in 6.366438058s

• [SLOW TEST:69.265 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 12:58:43.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-cc862959-cefa-4019-9ce1-31fae875907a in namespace container-probe-4456
Jan 31 12:58:54.164: INFO: Started pod test-webserver-cc862959-cefa-4019-9ce1-31fae875907a in namespace container-probe-4456
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 12:58:54.168: INFO: Initial restart count of pod test-webserver-cc862959-cefa-4019-9ce1-31fae875907a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:02:54.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4456" for this suite.
Jan 31 13:03:00.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:03:00.558: INFO: namespace container-probe-4456 deletion completed in 6.282530223s

• [SLOW TEST:256.628 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:03:00.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-bbd02115-4533-4581-bb0d-1634b62682c8
STEP: Creating a pod to test consume secrets
Jan 31 13:03:00.790: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53" in namespace "projected-5173" to be "success or failure"
Jan 31 13:03:00.871: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 81.743331ms
Jan 31 13:03:03.491: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701513038s
Jan 31 13:03:05.502: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712070281s
Jan 31 13:03:07.511: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.721241731s
Jan 31 13:03:09.518: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.72882681s
Jan 31 13:03:11.528: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.738762782s
STEP: Saw pod success
Jan 31 13:03:11.529: INFO: Pod "pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53" satisfied condition "success or failure"
Jan 31 13:03:11.535: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53 container projected-secret-volume-test:
STEP: delete the pod
Jan 31 13:03:11.686: INFO: Waiting for pod pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53 to disappear
Jan 31 13:03:11.701: INFO: Pod pod-projected-secrets-6ce5f10b-fa2e-40f1-819a-0baa742e0c53 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:03:11.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5173" for this suite.
Jan 31 13:03:17.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:03:17.969: INFO: namespace projected-5173 deletion completed in 6.246924699s

• [SLOW TEST:17.410 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:03:17.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 13:03:18.148: INFO: Waiting up to 5m0s for pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b" in namespace "emptydir-1620" to be "success or failure"
Jan 31 13:03:18.173: INFO: Pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.735124ms
Jan 31 13:03:20.188: INFO: Pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039752613s
Jan 31 13:03:22.197: INFO: Pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049028146s
Jan 31 13:03:24.334: INFO: Pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18598868s
Jan 31 13:03:26.347: INFO: Pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.198721899s
STEP: Saw pod success
Jan 31 13:03:26.347: INFO: Pod "pod-931a6062-125c-404e-9cf7-5e50dbfff23b" satisfied condition "success or failure"
Jan 31 13:03:26.350: INFO: Trying to get logs from node iruya-node pod pod-931a6062-125c-404e-9cf7-5e50dbfff23b container test-container:
STEP: delete the pod
Jan 31 13:03:26.406: INFO: Waiting for pod pod-931a6062-125c-404e-9cf7-5e50dbfff23b to disappear
Jan 31 13:03:26.479: INFO: Pod pod-931a6062-125c-404e-9cf7-5e50dbfff23b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:03:26.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1620" for this suite.
Jan 31 13:03:32.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:03:32.646: INFO: namespace emptydir-1620 deletion completed in 6.15612576s

• [SLOW TEST:14.676 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:03:32.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1496
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1496 to expose endpoints map[]
Jan 31 13:03:32.811: INFO: successfully validated that service multi-endpoint-test in namespace services-1496 exposes endpoints map[] (10.785452ms elapsed)
STEP: Creating pod pod1 in namespace services-1496
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1496 to expose endpoints map[pod1:[100]]
Jan 31 13:03:37.064: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.223781156s elapsed, will retry)
Jan 31 13:03:40.121: INFO: successfully validated that service multi-endpoint-test in namespace services-1496 exposes endpoints map[pod1:[100]] (7.280176012s elapsed)
STEP: Creating pod pod2 in namespace services-1496
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1496 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 31 13:03:45.060: INFO: Unexpected endpoints: found map[c4eb26c3-c0f6-4717-8836-aa5368e27a5d:[100]], expected map[pod1:[100] pod2:[101]] (4.934170071s elapsed, will retry)
Jan 31 13:03:48.152: INFO: successfully validated that service multi-endpoint-test in namespace services-1496 exposes endpoints map[pod1:[100] pod2:[101]] (8.026012655s elapsed)
STEP: Deleting pod pod1 in namespace services-1496
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1496 to expose endpoints map[pod2:[101]]
Jan 31 13:03:48.226: INFO: successfully validated that service multi-endpoint-test in namespace services-1496 exposes endpoints map[pod2:[101]] (60.679484ms elapsed)
STEP: Deleting pod pod2 in namespace services-1496
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1496 to expose endpoints map[]
Jan 31 13:03:48.349: INFO: successfully validated that service multi-endpoint-test in namespace services-1496 exposes endpoints map[] (111.162928ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:03:48.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1496" for this suite.
Jan 31 13:04:12.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:04:12.569: INFO: namespace services-1496 deletion completed in 24.165869898s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:39.923 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:04:12.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 31 13:04:20.346: INFO: 10 pods remaining
Jan 31 13:04:20.346: INFO: 9 pods has nil DeletionTimestamp
Jan 31 13:04:20.346: INFO:
Jan 31 13:04:20.811: INFO: 0 pods remaining
Jan 31 13:04:20.811: INFO: 0 pods has nil DeletionTimestamp
Jan 31 13:04:20.811: INFO:
STEP: Gathering metrics
W0131 13:04:21.761383 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 13:04:21.761: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:04:21.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4615" for this suite.
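The deleteOptions behavior this garbage-collector test exercises is foreground cascading deletion: the owner (here a ReplicationController) is retained, with a `foregroundDeletion` finalizer, until the garbage collector has removed all of its dependent pods — which matches the "10 pods remaining … 0 pods remaining" progression above. A sketch of the DELETE request body that requests this (illustrative; the test sets it through client-go rather than raw JSON):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```

The other accepted values are `"Background"` (owner deleted immediately, dependents collected afterwards) and `"Orphan"` (dependents left behind).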
Jan 31 13:04:31.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:04:32.130: INFO: namespace gc-4615 deletion completed in 10.365387236s
• [SLOW TEST:19.560 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:04:32.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 31 13:04:32.241: INFO: Waiting up to 5m0s for pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9" in namespace "containers-6627" to be "success or failure"
Jan 31 13:04:32.289: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 47.789541ms
Jan 31 13:04:34.303: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061862107s
Jan 31 13:04:36.318: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07704301s
Jan 31 13:04:38.332: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090820081s
Jan 31 13:04:40.341: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099552767s
Jan 31 13:04:42.350: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108874833s
STEP: Saw pod success
Jan 31 13:04:42.350: INFO: Pod "client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9" satisfied condition "success or failure"
Jan 31 13:04:42.353: INFO: Trying to get logs from node iruya-node pod client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9 container test-container:
STEP: delete the pod
Jan 31 13:04:42.397: INFO: Waiting for pod client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9 to disappear
Jan 31 13:04:42.402: INFO: Pod client-containers-8053e80f-03f3-4f00-971b-073ba3ad96d9 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:04:42.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6627" for this suite.
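What the "override command" test checks is that `spec.containers[].command` replaces the image's Docker ENTRYPOINT (while `args` would replace CMD). A minimal sketch of such a pod — image, names, and the echoed string are illustrative, not taken from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      # command replaces the image's ENTRYPOINT; args would replace its CMD
      command: ["/bin/echo", "entrypoint overridden"]
```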
Jan 31 13:04:48.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:04:48.626: INFO: namespace containers-6627 deletion completed in 6.186060825s
• [SLOW TEST:16.495 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:04:48.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9161/secret-test-7c0c01d8-11c7-4534-8cb9-625e2e35a6f6
STEP: Creating a pod to test consume secrets
Jan 31 13:04:48.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57" in namespace "secrets-9161" to be "success or failure"
Jan 31 13:04:48.781: INFO: Pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 11.966ms
Jan 31 13:04:50.820: INFO: Pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050910618s
Jan 31 13:04:52.833: INFO: Pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06342791s
Jan 31 13:04:54.847: INFO: Pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078013195s
Jan 31 13:04:56.874: INFO: Pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104366032s
STEP: Saw pod success
Jan 31 13:04:56.874: INFO: Pod "pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57" satisfied condition "success or failure"
Jan 31 13:04:56.883: INFO: Trying to get logs from node iruya-node pod pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57 container env-test:
STEP: delete the pod
Jan 31 13:04:56.991: INFO: Waiting for pod pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57 to disappear
Jan 31 13:04:56.997: INFO: Pod pod-configmaps-81a78531-9a01-493f-a106-8de6a0f9cc57 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:04:56.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9161" for this suite.
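"Consumable via the environment" means the pod injects a Secret key into a container environment variable with `env[].valueFrom.secretKeyRef`, then the `env-test` container dumps its environment for verification. An illustrative sketch of the two objects involved (all names and the key/value are hypothetical; the test generates its own):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==        # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod
spec:
  restartPolicy: Never
  containers:
    - name: env-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env"]
      env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1
```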
Jan 31 13:05:03.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:03.185: INFO: namespace secrets-9161 deletion completed in 6.176227855s
• [SLOW TEST:14.558 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:05:03.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 13:05:03.256: INFO: Waiting up to 5m0s for pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df" in namespace "emptydir-2963" to be "success or failure"
Jan 31 13:05:03.262: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df": Phase="Pending", Reason="", readiness=false. Elapsed: 5.828698ms
Jan 31 13:05:05.272: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016325323s
Jan 31 13:05:07.284: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02813495s
Jan 31 13:05:09.297: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041024485s
Jan 31 13:05:11.304: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048507299s
Jan 31 13:05:13.312: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056560752s
STEP: Saw pod success
Jan 31 13:05:13.312: INFO: Pod "pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df" satisfied condition "success or failure"
Jan 31 13:05:13.316: INFO: Trying to get logs from node iruya-node pod pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df container test-container:
STEP: delete the pod
Jan 31 13:05:13.377: INFO: Waiting for pod pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df to disappear
Jan 31 13:05:13.426: INFO: Pod pod-59b0ea9b-2a2a-4d04-90f0-fd2e4c4488df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:05:13.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2963" for this suite.
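In the EmptyDir `(non-root,0666,default)` naming scheme, "non-root" means the container runs as a non-zero UID, "0666" is the file mode the test writes and reads back inside the volume, and "default" means the volume is backed by node-local storage rather than tmpfs. An illustrative sketch of such a pod (the real test uses a dedicated mounttest image; everything here is an assumption for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # the "non-root" part of the test name
  containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}             # "default" medium: node-local storage, not tmpfs
```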
Jan 31 13:05:19.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:19.634: INFO: namespace emptydir-2963 deletion completed in 6.197906972s
• [SLOW TEST:16.449 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:05:19.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 31 13:05:19.732: INFO: Waiting up to 5m0s for pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088" in namespace "downward-api-1758" to be "success or failure"
Jan 31 13:05:19.800: INFO: Pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088": Phase="Pending", Reason="", readiness=false. Elapsed: 67.937403ms
Jan 31 13:05:22.127: INFO: Pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394286877s
Jan 31 13:05:24.139: INFO: Pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406959556s
Jan 31 13:05:26.160: INFO: Pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427980319s
Jan 31 13:05:28.181: INFO: Pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.448350192s
STEP: Saw pod success
Jan 31 13:05:28.181: INFO: Pod "downward-api-44a28ca1-f02c-4285-bd71-44de409ba088" satisfied condition "success or failure"
Jan 31 13:05:28.187: INFO: Trying to get logs from node iruya-node pod downward-api-44a28ca1-f02c-4285-bd71-44de409ba088 container dapi-container:
STEP: delete the pod
Jan 31 13:05:28.351: INFO: Waiting for pod downward-api-44a28ca1-f02c-4285-bd71-44de409ba088 to disappear
Jan 31 13:05:28.358: INFO: Pod downward-api-44a28ca1-f02c-4285-bd71-44de409ba088 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:05:28.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1758" for this suite.
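The downward API test above exposes pod metadata to its own container through `fieldRef` environment variables. A minimal sketch of the pattern being verified (pod name and env-var names are illustrative; the `fieldPath` values are the standard ones for name, namespace, and pod IP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "env | grep POD_"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```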
Jan 31 13:05:34.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:34.629: INFO: namespace downward-api-1758 deletion completed in 6.260879338s
• [SLOW TEST:14.994 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:05:34.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 31 13:05:34.791: INFO: Waiting up to 5m0s for pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b" in namespace "emptydir-6762" to be "success or failure"
Jan 31 13:05:34.809: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.806997ms
Jan 31 13:05:36.828: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036544817s
Jan 31 13:05:38.840: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048216651s
Jan 31 13:05:40.854: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062697952s
Jan 31 13:05:42.885: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093297643s
Jan 31 13:05:44.905: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Running", Reason="", readiness=true. Elapsed: 10.113401673s
Jan 31 13:05:46.917: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.125173429s
STEP: Saw pod success
Jan 31 13:05:46.917: INFO: Pod "pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b" satisfied condition "success or failure"
Jan 31 13:05:46.923: INFO: Trying to get logs from node iruya-node pod pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b container test-container:
STEP: delete the pod
Jan 31 13:05:47.003: INFO: Waiting for pod pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b to disappear
Jan 31 13:05:47.019: INFO: Pod pod-3f5e4b31-7d3e-43f7-9dc3-fd6b1fce318b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:05:47.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6762" for this suite.
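The `(root,0666,tmpfs)` variant differs from the default-medium EmptyDir cases only in the volume's backing store: setting `medium: Memory` puts the emptyDir on tmpfs. The relevant stanza, as a sketch:

```yaml
volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed; contents count against the pod's memory limit
```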
Jan 31 13:05:53.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:05:53.154: INFO: namespace emptydir-6762 deletion completed in 6.125992037s
• [SLOW TEST:18.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:05:53.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a20c057d-f738-4c60-9ca2-1abb2f2635bc
STEP: Creating a pod to test consume secrets
Jan 31 13:05:53.342: INFO: Waiting up to 5m0s for pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266" in namespace "secrets-4982" to be "success or failure"
Jan 31 13:05:53.347: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124445ms
Jan 31 13:05:55.358: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016255739s
Jan 31 13:05:57.373: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030867514s
Jan 31 13:05:59.384: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042069223s
Jan 31 13:06:01.398: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055712195s
Jan 31 13:06:03.414: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072268386s
Jan 31 13:06:05.426: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083852485s
STEP: Saw pod success
Jan 31 13:06:05.426: INFO: Pod "pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266" satisfied condition "success or failure"
Jan 31 13:06:05.432: INFO: Trying to get logs from node iruya-node pod pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266 container secret-volume-test:
STEP: delete the pod
Jan 31 13:06:05.477: INFO: Waiting for pod pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266 to disappear
Jan 31 13:06:05.502: INFO: Pod pod-secrets-a6ac3b42-fa3f-48b5-b687-e0b7f49dd266 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:06:05.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4982" for this suite.
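"Consumable in multiple volumes" means the same Secret is mounted at two different paths in one pod, and the container verifies both mounts expose the same keys as files. An illustrative sketch (all names are hypothetical; the test generates its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-two-mounts
spec:
  restartPolicy: Never
  containers:
    - name: secret-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
      volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
          readOnly: true
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
          readOnly: true
  volumes:
    - name: secret-volume-1
      secret:
        secretName: secret-test   # both volumes reference the same Secret
    - name: secret-volume-2
      secret:
        secretName: secret-test
```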
Jan 31 13:06:11.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:06:11.896: INFO: namespace secrets-4982 deletion completed in 6.389095047s
• [SLOW TEST:18.742 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:06:11.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-c93c0611-f93e-4fd4-a631-6887ffa75083 in namespace container-probe-2517
Jan 31 13:06:24.573: INFO: Started pod busybox-c93c0611-f93e-4fd4-a631-6887ffa75083 in namespace container-probe-2517
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 13:06:24.581: INFO: Initial restart count of pod busybox-c93c0611-f93e-4fd4-a631-6887ffa75083 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:10:26.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2517" for this suite.
Jan 31 13:10:32.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:10:32.796: INFO: namespace container-probe-2517 deletion completed in 6.152069946s
• [SLOW TEST:260.899 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:10:32.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 31 13:10:33.003: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6913,SelfLink:/api/v1/namespaces/watch-6913/configmaps/e2e-watch-test-label-changed,UID:6b587f03-09e0-4cb9-9c76-40877122be08,ResourceVersion:22562306,Generation:0,CreationTimestamp:2020-01-31 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 13:10:33.004: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6913,SelfLink:/api/v1/namespaces/watch-6913/configmaps/e2e-watch-test-label-changed,UID:6b587f03-09e0-4cb9-9c76-40877122be08,ResourceVersion:22562307,Generation:0,CreationTimestamp:2020-01-31 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 31 13:10:33.005: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6913,SelfLink:/api/v1/namespaces/watch-6913/configmaps/e2e-watch-test-label-changed,UID:6b587f03-09e0-4cb9-9c76-40877122be08,ResourceVersion:22562308,Generation:0,CreationTimestamp:2020-01-31 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 31 13:10:43.119: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6913,SelfLink:/api/v1/namespaces/watch-6913/configmaps/e2e-watch-test-label-changed,UID:6b587f03-09e0-4cb9-9c76-40877122be08,ResourceVersion:22562323,Generation:0,CreationTimestamp:2020-01-31 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 13:10:43.120: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6913,SelfLink:/api/v1/namespaces/watch-6913/configmaps/e2e-watch-test-label-changed,UID:6b587f03-09e0-4cb9-9c76-40877122be08,ResourceVersion:22562324,Generation:0,CreationTimestamp:2020-01-31 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 31 13:10:43.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6913,SelfLink:/api/v1/namespaces/watch-6913/configmaps/e2e-watch-test-label-changed,UID:6b587f03-09e0-4cb9-9c76-40877122be08,ResourceVersion:22562325,Generation:0,CreationTimestamp:2020-01-31 13:10:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:10:43.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6913" for this suite.
Jan 31 13:10:49.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:10:49.250: INFO: namespace watch-6913 deletion completed in 6.118126265s
• [SLOW TEST:16.453 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:10:49.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 13:10:49.346: INFO: Waiting up to 5m0s for pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462" in namespace "emptydir-1844" to be "success or failure"
Jan 31 13:10:49.354: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462": Phase="Pending", Reason="", readiness=false. Elapsed: 7.407629ms
Jan 31 13:10:51.374: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027488996s
Jan 31 13:10:53.390: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043692034s
Jan 31 13:10:55.401: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054790889s
Jan 31 13:10:57.418: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462": Phase="Running", Reason="", readiness=true. Elapsed: 8.071344682s
Jan 31 13:10:59.437: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090395132s
STEP: Saw pod success
Jan 31 13:10:59.437: INFO: Pod "pod-56efa9c4-0a35-4f86-b884-aaf5d1220462" satisfied condition "success or failure"
Jan 31 13:10:59.445: INFO: Trying to get logs from node iruya-node pod pod-56efa9c4-0a35-4f86-b884-aaf5d1220462 container test-container:
STEP: delete the pod
Jan 31 13:10:59.632: INFO: Waiting for pod pod-56efa9c4-0a35-4f86-b884-aaf5d1220462 to disappear
Jan 31 13:10:59.644: INFO: Pod pod-56efa9c4-0a35-4f86-b884-aaf5d1220462 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:10:59.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1844" for this suite.
Jan 31 13:11:05.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:11:05.830: INFO: namespace emptydir-1844 deletion completed in 6.173191744s
• [SLOW TEST:16.580 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:11:05.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 31 13:11:05.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16" in namespace "downward-api-946" to be "success or failure" Jan 31 13:11:05.983: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 9.066216ms Jan 31 13:11:07.989: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015143882s Jan 31 13:11:09.998: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023878765s Jan 31 13:11:12.012: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037784558s Jan 31 13:11:14.018: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044459973s Jan 31 13:11:16.024: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.050290298s STEP: Saw pod success Jan 31 13:11:16.024: INFO: Pod "downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16" satisfied condition "success or failure" Jan 31 13:11:16.027: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16 container client-container: STEP: delete the pod Jan 31 13:11:16.153: INFO: Waiting for pod downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16 to disappear Jan 31 13:11:16.172: INFO: Pod downwardapi-volume-8a645698-44a8-4d1f-92a7-6df28780ba16 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:11:16.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-946" for this suite. Jan 31 13:11:22.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:11:22.347: INFO: namespace downward-api-946 deletion completed in 6.168953335s • [SLOW TEST:16.515 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:11:22.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-56459c04-f1aa-450f-b36a-aa25206bcf74 STEP: Creating configMap with name cm-test-opt-upd-c9d89de9-ade4-4064-bbbf-a44f7085ffab STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-56459c04-f1aa-450f-b36a-aa25206bcf74 STEP: Updating configmap cm-test-opt-upd-c9d89de9-ade4-4064-bbbf-a44f7085ffab STEP: Creating configMap with name cm-test-opt-create-87aaa36a-be9e-499f-badf-bb55640c4ff2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:12:44.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3917" for this suite. 
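The projected ConfigMap test above deletes one optional ConfigMap, updates a second, and creates a third, then waits for the volume to reflect all three changes. The key behavior is that an *optional* ConfigMap source that is absent is simply omitted from the projected volume instead of failing the mount. A minimal offline sketch of that resolution rule (function and names are hypothetical, not the kubelet's actual code):

```python
def projected_volume_files(configmaps, sources):
    """Resolve projected-volume file contents from (name, optional) sources.
    A missing optional ConfigMap is skipped quietly; a missing required one
    is a mount error. Sketch of the semantics the e2e test exercises."""
    files = {}
    for name, optional in sources:
        data = configmaps.get(name)
        if data is None:
            if optional:
                continue            # optional + absent: omit from the volume
            raise KeyError(name)    # required + absent: mount failure
        files.update(data)
    return files

# 'cm-del' has been deleted, as in the test; only 'cm-upd' contributes files.
cms = {"cm-upd": {"data-3": "value-3"}}
sources = [("cm-del", True), ("cm-upd", True)]
assert projected_volume_files(cms, sources) == {"data-3": "value-3"}
```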
Jan 31 13:13:08.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:13:08.854: INFO: namespace projected-3917 deletion completed in 24.207148414s • [SLOW TEST:106.507 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:13:08.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-c4f97ba4-f7c6-437f-963a-b57d4e292e99 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c4f97ba4-f7c6-437f-963a-b57d4e292e99 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:14:32.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3120" for this suite. 
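The ConfigMap volume test above updates the ConfigMap and then spends most of its 106 seconds in "waiting to observe update in volume": the kubelet propagates ConfigMap changes on its periodic sync, so the test has to poll the mounted file. A self-contained sketch of that polling loop, using a local temp file as a stand-in for the mounted volume (helper name is hypothetical):

```python
import os
import tempfile
import time

def wait_for_file_content(path, expected, timeout=5.0, interval=0.1):
    """Poll a file until its content matches, mimicking the e2e step
    'waiting to observe update in volume' (sketch, not framework code)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(path) as f:
                if f.read() == expected:
                    return True
        except FileNotFoundError:
            pass
        time.sleep(interval)
    return False

# Local stand-in for the mounted ConfigMap key: write it, then "update" it.
with tempfile.TemporaryDirectory() as d:
    key = os.path.join(d, "data-1")
    with open(key, "w") as f:
        f.write("value-1")
    assert wait_for_file_content(key, "value-1")
    with open(key, "w") as f:
        f.write("value-2")   # simulate the kubelet syncing the update
    assert wait_for_file_content(key, "value-2")
```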
Jan 31 13:14:54.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:14:54.888: INFO: namespace configmap-3120 deletion completed in 22.14722093s • [SLOW TEST:106.033 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:14:54.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 31 13:14:55.009: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62" in namespace "projected-7946" to be "success or failure" Jan 31 13:14:55.014: INFO: Pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.18954ms Jan 31 13:14:57.020: INFO: Pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010482096s Jan 31 13:14:59.029: INFO: Pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019688419s Jan 31 13:15:01.038: INFO: Pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028314731s Jan 31 13:15:03.090: INFO: Pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080267334s STEP: Saw pod success Jan 31 13:15:03.090: INFO: Pod "downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62" satisfied condition "success or failure" Jan 31 13:15:03.094: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62 container client-container: STEP: delete the pod Jan 31 13:15:03.181: INFO: Waiting for pod downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62 to disappear Jan 31 13:15:03.225: INFO: Pod downwardapi-volume-d7b3041e-eda4-4f15-8a95-de646684df62 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:15:03.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7946" for this suite. 
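The downward API test above verifies that when a container sets no CPU limit, `limits.cpu` exposed through the downward API volume falls back to the node's allocatable CPU rather than being empty. That fallback can be sketched in one function (millicore values and names are illustrative, not kubelet code):

```python
def effective_cpu_limit_millis(container_limit_millis, node_allocatable_millis):
    """Downward API behavior under test: with no container CPU limit set,
    limits.cpu reports the node's allocatable CPU (simplified sketch)."""
    if container_limit_millis is not None:
        return container_limit_millis
    return node_allocatable_millis

# A container with an explicit 500m limit reports 500m...
assert effective_cpu_limit_millis(500, 4000) == 500
# ...while one with no limit reports node allocatable, as the test expects.
assert effective_cpu_limit_millis(None, 4000) == 4000
```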
Jan 31 13:15:09.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:15:09.411: INFO: namespace projected-7946 deletion completed in 6.171907696s • [SLOW TEST:14.519 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:15:09.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 31 13:15:09.617: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3690,SelfLink:/api/v1/namespaces/watch-3690/configmaps/e2e-watch-test-watch-closed,UID:834a9106-062e-4094-86e8-d54aef521c39,ResourceVersion:22562827,Generation:0,CreationTimestamp:2020-01-31 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 31 13:15:09.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3690,SelfLink:/api/v1/namespaces/watch-3690/configmaps/e2e-watch-test-watch-closed,UID:834a9106-062e-4094-86e8-d54aef521c39,ResourceVersion:22562828,Generation:0,CreationTimestamp:2020-01-31 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 31 13:15:09.647: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3690,SelfLink:/api/v1/namespaces/watch-3690/configmaps/e2e-watch-test-watch-closed,UID:834a9106-062e-4094-86e8-d54aef521c39,ResourceVersion:22562829,Generation:0,CreationTimestamp:2020-01-31 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 31 13:15:09.648: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3690,SelfLink:/api/v1/namespaces/watch-3690/configmaps/e2e-watch-test-watch-closed,UID:834a9106-062e-4094-86e8-d54aef521c39,ResourceVersion:22562830,Generation:0,CreationTimestamp:2020-01-31 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:15:09.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3690" for this suite. 
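The watch-restart test above relies on a core apiserver guarantee: a new watch opened with the last observed `resourceVersion` replays exactly the events that happened after that version, which is why the second watch sees the MODIFIED (rv 22562829) and DELETED (rv 22562830) events that occurred while the first watch was closed. A toy in-memory sketch of that semantics (not client-go; resource versions taken from the log above):

```python
from collections import namedtuple

Event = namedtuple("Event", ["type", "resource_version"])

def replay_from(events, last_seen_rv):
    """Sketch of watch-resume semantics: a watch started at
    resourceVersion=last_seen_rv receives only later events."""
    return [e for e in events if e.resource_version > last_seen_rv]

log = [
    Event("ADDED",    22562827),
    Event("MODIFIED", 22562828),   # first watch closed after this event
    Event("MODIFIED", 22562829),
    Event("DELETED",  22562830),
]
resumed = replay_from(log, 22562828)
assert [e.type for e in resumed] == ["MODIFIED", "DELETED"]
```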
Jan 31 13:15:15.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:15:15.866: INFO: namespace watch-3690 deletion completed in 6.211425394s • [SLOW TEST:6.455 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:15:15.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 31 13:15:16.035: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 31 13:15:16.051: INFO: Waiting for terminating namespaces to be deleted... 
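The SchedulerPredicates test that follows totals each node's existing pod CPU requests (quantities like `cpu=100m`, `cpu=250m`) before creating filler pods sized to consume the remainder. A simplified sketch of parsing and summing such quantities (the real framework uses `resource.Quantity`; this handles only plain and milli forms):

```python
def parse_cpu_millis(q):
    """Parse a Kubernetes CPU quantity ('100m', '250m', '1') into
    millicores. Simplified sketch of resource.Quantity handling."""
    q = q.strip()
    if q.endswith("m"):
        return int(q[:-1])
    return int(float(q) * 1000)

# The requests logged below for node iruya-server-sfge57q7djm7:
requests = ["100m", "100m", "0m", "250m", "200m", "0m", "100m", "20m"]
assert sum(parse_cpu_millis(r) for r in requests) == 770
```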
Jan 31 13:15:16.053: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 31 13:15:16.063: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 31 13:15:16.063: INFO: Container weave ready: true, restart count 0 Jan 31 13:15:16.063: INFO: Container weave-npc ready: true, restart count 0 Jan 31 13:15:16.063: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.063: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 13:15:16.063: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 31 13:15:16.073: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.073: INFO: Container etcd ready: true, restart count 0 Jan 31 13:15:16.073: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 31 13:15:16.073: INFO: Container weave ready: true, restart count 0 Jan 31 13:15:16.073: INFO: Container weave-npc ready: true, restart count 0 Jan 31 13:15:16.074: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.074: INFO: Container coredns ready: true, restart count 0 Jan 31 13:15:16.074: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.074: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 31 13:15:16.074: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.074: INFO: Container kube-proxy ready: true, restart count 0 Jan 31 13:15:16.074: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 
UTC (1 container statuses recorded) Jan 31 13:15:16.074: INFO: Container kube-apiserver ready: true, restart count 0 Jan 31 13:15:16.074: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.074: INFO: Container kube-scheduler ready: true, restart count 13 Jan 31 13:15:16.074: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 31 13:15:16.074: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Jan 31 13:15:16.197: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Jan 31 13:15:16.197: INFO: Pod 
weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d.15eefb048f264186], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9271/filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d.15eefb059e4a7fce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d.15eefb064b723b67], Reason = [Created], Message = [Created container filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d] STEP: Considering event: Type = [Normal], Name = [filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d.15eefb0671312912], Reason = [Started], Message = [Started container filler-pod-75d816cc-ccbd-4e8e-b4b9-597ded948e6d] STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4.15eefb048e56b552], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9271/filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4 to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4.15eefb05b79f755b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4.15eefb067b1fa010], Reason = [Created], Message = [Created container filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4] STEP: Considering event: Type = [Normal], Name = [filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4.15eefb06a39f934c], Reason = [Started], Message = [Started container filler-pod-f9d2caa3-0d8c-4c99-bdb9-5eef398873d4] STEP: 
Considering event: Type = [Warning], Name = [additional-pod.15eefb075cc6d861], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:15:29.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9271" for this suite. Jan 31 13:15:35.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:15:35.526: INFO: namespace sched-pred-9271 deletion completed in 6.151138785s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:19.658 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:15:35.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting 
for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 31 13:15:36.861: INFO: Waiting up to 5m0s for pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9" in namespace "emptydir-880" to be "success or failure" Jan 31 13:15:36.912: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 50.490699ms Jan 31 13:15:38.923: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061292461s Jan 31 13:15:40.930: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068753923s Jan 31 13:15:42.943: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081653483s Jan 31 13:15:44.955: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093380843s Jan 31 13:15:46.966: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104091595s Jan 31 13:15:48.978: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.116406227s STEP: Saw pod success Jan 31 13:15:48.978: INFO: Pod "pod-afa1fa79-0312-4381-8652-de656e5e40e9" satisfied condition "success or failure" Jan 31 13:15:48.984: INFO: Trying to get logs from node iruya-node pod pod-afa1fa79-0312-4381-8652-de656e5e40e9 container test-container: STEP: delete the pod Jan 31 13:15:49.039: INFO: Waiting for pod pod-afa1fa79-0312-4381-8652-de656e5e40e9 to disappear Jan 31 13:15:49.083: INFO: Pod pod-afa1fa79-0312-4381-8652-de656e5e40e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:15:49.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-880" for this suite. Jan 31 13:15:55.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:15:55.238: INFO: namespace emptydir-880 deletion completed in 6.151455674s • [SLOW TEST:19.712 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:15:55.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ea719336-c8f3-47d7-9088-c9c8acca07eb STEP: Creating a pod to test consume configMaps Jan 31 13:15:55.342: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca" in namespace "projected-7342" to be "success or failure" Jan 31 13:15:55.354: INFO: Pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86101ms Jan 31 13:15:57.367: INFO: Pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024263249s Jan 31 13:15:59.378: INFO: Pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035820065s Jan 31 13:16:01.391: INFO: Pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048738664s Jan 31 13:16:03.400: INFO: Pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057452428s STEP: Saw pod success Jan 31 13:16:03.400: INFO: Pod "pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca" satisfied condition "success or failure" Jan 31 13:16:03.404: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca container projected-configmap-volume-test: STEP: delete the pod Jan 31 13:16:03.460: INFO: Waiting for pod pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca to disappear Jan 31 13:16:03.467: INFO: Pod pod-projected-configmaps-e1ad886d-dbd3-46fe-964c-b6e1680c07ca no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:16:03.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7342" for this suite. Jan 31 13:16:09.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:16:09.899: INFO: namespace projected-7342 deletion completed in 6.42163474s • [SLOW TEST:14.660 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 
13:16:09.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 31 13:16:09.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4900' Jan 31 13:16:11.895: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 31 13:16:11.895: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 31 13:16:12.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4900' Jan 31 13:16:12.261: INFO: stderr: "" Jan 31 13:16:12.261: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:16:12.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4900" for this suite. 
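Editor's note: the `Running '/usr/local/bin/kubectl ...'` entries above show the e2e framework shelling out to kubectl and logging stdout and stderr separately — note how the generator-deprecation warning lands on stderr while the `job.batch/e2e-test-nginx-job created` message lands on stdout. A minimal sketch of that capture pattern (assumptions: this is not the framework's actual Go code, and `echo` stands in for a kubectl invocation so no cluster is needed):

```python
import subprocess

def run_cli(argv):
    """Run a CLI command, returning (stdout, stderr) the way the e2e log records them."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout, result.stderr

# Hypothetical stand-in for a kubectl call; a real run would pass the kubectl argv.
out, err = run_cli(["echo", "job.batch/e2e-test-nginx-job created"])
```

Capturing the streams separately is what lets the framework assert on stdout ("... created") while merely logging the deprecation notice from stderr.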
Jan 31 13:16:34.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:16:34.556: INFO: namespace kubectl-4900 deletion completed in 22.202319043s • [SLOW TEST:24.655 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:16:34.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2e8e9dcc-6ea3-45d6-ad7d-1caed4093491 STEP: Creating a pod to test consume secrets Jan 31 13:16:35.371: INFO: Waiting up to 5m0s for pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589" in namespace "secrets-8496" to be "success or failure" Jan 31 13:16:35.377: INFO: Pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.367083ms Jan 31 13:16:37.384: INFO: Pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012180032s Jan 31 13:16:39.486: INFO: Pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113982391s Jan 31 13:16:41.493: INFO: Pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121026344s Jan 31 13:16:43.500: INFO: Pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128621s STEP: Saw pod success Jan 31 13:16:43.500: INFO: Pod "pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589" satisfied condition "success or failure" Jan 31 13:16:43.505: INFO: Trying to get logs from node iruya-node pod pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589 container secret-env-test: STEP: delete the pod Jan 31 13:16:43.684: INFO: Waiting for pod pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589 to disappear Jan 31 13:16:43.689: INFO: Pod pod-secrets-fb400862-6ced-477f-98d2-aad2f5040589 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:16:43.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8496" for this suite. 
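Editor's note: the repeated `Phase="Pending" ... Elapsed: ...` entries throughout this log come from a poll loop — check the pod phase roughly every two seconds until it reaches a terminal phase ("success or failure") or the 5m0s timeout expires. A sketch of that loop, with a stubbed phase lookup in place of real API calls (the function and stub names are illustrative, not the framework's):

```python
import time

def wait_for_phase(get_phase, terminal=("Succeeded", "Failed"), timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal phase or the timeout expires."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in terminal:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval)

# Stubbed phases standing in for successive API-server answers.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_phase(lambda: next(phases), interval=0.01)
```

Each iteration corresponds to one `Elapsed:` line in the log; the loop exits on the first terminal phase, which is why the final entry for each pod reads `Phase="Succeeded"`.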
Jan 31 13:16:49.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:16:49.938: INFO: namespace secrets-8496 deletion completed in 6.24006799s • [SLOW TEST:15.381 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:16:49.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 31 13:16:50.099: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jan 31 13:16:50.130: INFO: Number of nodes with available pods: 0 Jan 31 13:16:50.130: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:51.154: INFO: Number of nodes with available pods: 0 Jan 31 13:16:51.154: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:52.158: INFO: Number of nodes with available pods: 0 Jan 31 13:16:52.158: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:53.157: INFO: Number of nodes with available pods: 0 Jan 31 13:16:53.157: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:54.157: INFO: Number of nodes with available pods: 0 Jan 31 13:16:54.157: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:55.394: INFO: Number of nodes with available pods: 0 Jan 31 13:16:55.394: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:56.435: INFO: Number of nodes with available pods: 0 Jan 31 13:16:56.436: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:57.149: INFO: Number of nodes with available pods: 0 Jan 31 13:16:57.150: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:58.146: INFO: Number of nodes with available pods: 0 Jan 31 13:16:58.146: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:16:59.157: INFO: Number of nodes with available pods: 0 Jan 31 13:16:59.157: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:00.151: INFO: Number of nodes with available pods: 2 Jan 31 13:17:00.151: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 31 13:17:00.237: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:00.238: INFO: Wrong image for pod: daemon-set-t96rb. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:01.276: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:01.276: INFO: Wrong image for pod: daemon-set-t96rb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:02.728: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:02.728: INFO: Wrong image for pod: daemon-set-t96rb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:03.284: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:03.284: INFO: Wrong image for pod: daemon-set-t96rb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:04.369: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:04.369: INFO: Wrong image for pod: daemon-set-t96rb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:05.275: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:05.275: INFO: Wrong image for pod: daemon-set-t96rb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:06.276: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:06.276: INFO: Wrong image for pod: daemon-set-t96rb. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:06.276: INFO: Pod daemon-set-t96rb is not available Jan 31 13:17:07.270: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:07.270: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:08.276: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:08.276: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:09.277: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:09.277: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:10.270: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:10.270: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:11.720: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:11.721: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:12.289: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:12.290: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:13.317: INFO: Pod daemon-set-8qptt is not available Jan 31 13:17:13.317: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:14.277: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:15.279: INFO: Wrong image for pod: daemon-set-pnt62. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:16.271: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:17.276: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:18.275: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:19.291: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:19.292: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:20.277: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:20.277: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:21.276: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:21.276: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:22.270: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:22.271: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:23.273: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:23.273: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:24.275: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jan 31 13:17:24.275: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:25.273: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:25.274: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:26.277: INFO: Wrong image for pod: daemon-set-pnt62. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 31 13:17:26.277: INFO: Pod daemon-set-pnt62 is not available Jan 31 13:17:27.274: INFO: Pod daemon-set-kzv8k is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 31 13:17:27.296: INFO: Number of nodes with available pods: 1 Jan 31 13:17:27.296: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:28.325: INFO: Number of nodes with available pods: 1 Jan 31 13:17:28.325: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:29.319: INFO: Number of nodes with available pods: 1 Jan 31 13:17:29.320: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:30.321: INFO: Number of nodes with available pods: 1 Jan 31 13:17:30.321: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:31.318: INFO: Number of nodes with available pods: 1 Jan 31 13:17:31.318: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:32.322: INFO: Number of nodes with available pods: 1 Jan 31 13:17:32.322: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:17:33.326: INFO: Number of nodes with available pods: 2 Jan 31 13:17:33.326: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8253, will wait for the garbage collector to delete the pods Jan 31 
13:17:33.421: INFO: Deleting DaemonSet.extensions daemon-set took: 13.627402ms Jan 31 13:17:33.722: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.769692ms Jan 31 13:17:47.985: INFO: Number of nodes with available pods: 0 Jan 31 13:17:47.985: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 13:17:47.993: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8253/daemonsets","resourceVersion":"22563268"},"items":null} Jan 31 13:17:47.997: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8253/pods","resourceVersion":"22563268"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:17:48.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8253" for this suite. Jan 31 13:17:54.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:17:54.152: INFO: namespace daemonsets-8253 deletion completed in 6.136172139s • [SLOW TEST:64.212 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jan 31 13:17:54.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4b0dbb21-28de-42a3-9f5a-8b0befe49bee STEP: Creating a pod to test consume secrets Jan 31 13:17:54.309: INFO: Waiting up to 5m0s for pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7" in namespace "secrets-8875" to be "success or failure" Jan 31 13:17:54.331: INFO: Pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.667523ms Jan 31 13:17:56.340: INFO: Pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031291809s Jan 31 13:17:58.355: INFO: Pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045963007s Jan 31 13:18:00.362: INFO: Pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05325667s Jan 31 13:18:02.374: INFO: Pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.065186248s STEP: Saw pod success Jan 31 13:18:02.374: INFO: Pod "pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7" satisfied condition "success or failure" Jan 31 13:18:02.380: INFO: Trying to get logs from node iruya-node pod pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7 container secret-volume-test: STEP: delete the pod Jan 31 13:18:02.432: INFO: Waiting for pod pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7 to disappear Jan 31 13:18:02.441: INFO: Pod pod-secrets-05529321-ca19-4bb9-a069-af4e47ecbbe7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:18:02.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8875" for this suite. Jan 31 13:18:08.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:18:08.624: INFO: namespace secrets-8875 deletion completed in 6.174462048s • [SLOW TEST:14.471 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:18:08.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: 
Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0131 13:18:24.595713 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 31 13:18:24.596: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:18:24.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2521" for this suite. 
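Editor's note: the garbage-collector test above exercises the rule that a dependent is collectable only once it has no remaining live owners — pods given `simpletest-rc-to-stay` as a second owner survive the deletion of `simpletest-rc-to-be-deleted`. A toy model of that ownership check (an illustration of the semantics, not the real controller logic):

```python
def collectable(owner_refs, live_owners):
    """A dependent may be garbage-collected only when none of its owners still exist."""
    return not any(ref in live_owners for ref in owner_refs)

live = {"simpletest-rc-to-stay"}
# Pod owned by both RCs: the surviving owner keeps it alive.
assert not collectable({"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"}, live)
# Pod owned only by the deleted RC: eligible for collection.
assert collectable({"simpletest-rc-to-be-deleted"}, live)
```

This is why the test is named "should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted".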
Jan 31 13:18:37.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:18:37.978: INFO: namespace gc-2521 deletion completed in 13.365643479s • [SLOW TEST:29.354 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:18:37.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 31 13:18:38.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c" in namespace "projected-8654" to be "success or failure" Jan 31 13:18:38.199: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.888291ms Jan 31 13:18:40.213: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026602624s Jan 31 13:18:42.226: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039447801s Jan 31 13:18:44.236: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049822383s Jan 31 13:18:46.243: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056345847s Jan 31 13:18:48.265: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078940265s STEP: Saw pod success Jan 31 13:18:48.266: INFO: Pod "downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c" satisfied condition "success or failure" Jan 31 13:18:48.276: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c container client-container: STEP: delete the pod Jan 31 13:18:48.345: INFO: Waiting for pod downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c to disappear Jan 31 13:18:48.355: INFO: Pod downwardapi-volume-b5499cae-af79-44ae-8d74-6177f088870c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:18:48.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8654" for this suite. 
Jan 31 13:18:54.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:18:54.509: INFO: namespace projected-8654 deletion completed in 6.143434402s • [SLOW TEST:16.530 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:18:54.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0131 13:19:04.920059 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 31 13:19:04.920: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:19:04.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9556" for this suite. 
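Editor's note: the daemon set RollingUpdate check earlier in this log (`Wrong image for pod: ... Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine`) repeatedly compares each pod's image against the update target until no pod reports a mismatch. A sketch of that per-iteration comparison (function name and pod map are illustrative; the real check also tracks availability):

```python
def pods_with_wrong_image(pod_images, expected):
    """Return the pods whose image does not yet match the rolling-update target."""
    return sorted(name for name, image in pod_images.items() if image != expected)

expected = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
pods = {
    "daemon-set-pnt62": "docker.io/library/nginx:1.14-alpine",  # not yet rolled
    "daemon-set-8qptt": expected,                               # already updated
}
stale = pods_with_wrong_image(pods, expected)
```

Each non-empty result produces one round of `Wrong image for pod` lines; the check passes once the list is empty for every node.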
Jan 31 13:19:10.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:19:11.072: INFO: namespace gc-9556 deletion completed in 6.146990512s
• [SLOW TEST:16.562 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:19:11.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-gl2z
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 13:19:11.160: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gl2z" in namespace "subpath-6175" to be "success or failure"
Jan 31 13:19:11.172: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Pending", Reason="", readiness=false. Elapsed: 11.960029ms
Jan 31 13:19:13.185: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024385574s
Jan 31 13:19:15.194: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033843051s
Jan 31 13:19:17.202: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04198319s
Jan 31 13:19:19.212: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 8.051358968s
Jan 31 13:19:21.223: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 10.062437828s
Jan 31 13:19:23.233: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 12.07279595s
Jan 31 13:19:25.244: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 14.083137181s
Jan 31 13:19:27.256: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 16.095170391s
Jan 31 13:19:29.270: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 18.109775191s
Jan 31 13:19:31.281: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 20.120838777s
Jan 31 13:19:33.290: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 22.129857081s
Jan 31 13:19:35.537: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 24.377012215s
Jan 31 13:19:37.546: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 26.385316878s
Jan 31 13:19:39.560: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Running", Reason="", readiness=true. Elapsed: 28.399783478s
Jan 31 13:19:41.570: INFO: Pod "pod-subpath-test-projected-gl2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.409235723s
STEP: Saw pod success
Jan 31 13:19:41.570: INFO: Pod "pod-subpath-test-projected-gl2z" satisfied condition "success or failure"
Jan 31 13:19:41.577: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-gl2z container test-container-subpath-projected-gl2z:
STEP: delete the pod
Jan 31 13:19:41.735: INFO: Waiting for pod pod-subpath-test-projected-gl2z to disappear
Jan 31 13:19:41.742: INFO: Pod pod-subpath-test-projected-gl2z no longer exists
STEP: Deleting pod pod-subpath-test-projected-gl2z
Jan 31 13:19:41.743: INFO: Deleting pod "pod-subpath-test-projected-gl2z" in namespace "subpath-6175"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:19:41.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6175" for this suite.
Jan 31 13:19:47.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:19:48.038: INFO: namespace subpath-6175 deletion completed in 6.284508476s
• [SLOW TEST:36.965 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:19:48.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 31 13:19:56.760: INFO: Successfully updated pod "annotationupdatec0aaa576-c6f1-4081-bd00-69040133593a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:19:58.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7652" for this suite.
Jan 31 13:20:20.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:20:20.955: INFO: namespace projected-7652 deletion completed in 22.139581431s
• [SLOW TEST:32.917 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:20:20.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1d72809f-7c7a-4ebb-8d99-da030df1092a
STEP: Creating a pod to test consume configMaps
Jan 31 13:20:21.075: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b" in namespace "projected-718" to be "success or failure"
Jan 31 13:20:21.100: INFO: Pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.949848ms
Jan 31 13:20:23.112: INFO: Pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036835986s
Jan 31 13:20:25.152: INFO: Pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076316144s
Jan 31 13:20:27.160: INFO: Pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084940691s
Jan 31 13:20:29.169: INFO: Pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093722782s
STEP: Saw pod success
Jan 31 13:20:29.169: INFO: Pod "pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b" satisfied condition "success or failure"
Jan 31 13:20:29.175: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b container projected-configmap-volume-test:
STEP: delete the pod
Jan 31 13:20:29.250: INFO: Waiting for pod pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b to disappear
Jan 31 13:20:29.261: INFO: Pod pod-projected-configmaps-b5c99b67-6c0c-4f1e-883c-ff744156dc9b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:20:29.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-718" for this suite.
Jan 31 13:20:35.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:20:35.516: INFO: namespace projected-718 deletion completed in 6.237362466s
• [SLOW TEST:14.560 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:20:35.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 31 13:20:35.894: INFO: Number of nodes with available pods: 0
Jan 31 13:20:35.894: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:37.071: INFO: Number of nodes with available pods: 0
Jan 31 13:20:37.071: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:38.600: INFO: Number of nodes with available pods: 0
Jan 31 13:20:38.600: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:39.220: INFO: Number of nodes with available pods: 0
Jan 31 13:20:39.220: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:39.924: INFO: Number of nodes with available pods: 0
Jan 31 13:20:39.924: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:40.938: INFO: Number of nodes with available pods: 0
Jan 31 13:20:40.938: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:43.914: INFO: Number of nodes with available pods: 0
Jan 31 13:20:43.914: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:44.945: INFO: Number of nodes with available pods: 0
Jan 31 13:20:44.945: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:45.924: INFO: Number of nodes with available pods: 0
Jan 31 13:20:45.924: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:20:46.988: INFO: Number of nodes with available pods: 2
Jan 31 13:20:46.988: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 31 13:20:47.066: INFO: Number of nodes with available pods: 2
Jan 31 13:20:47.066: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9739, will wait for the garbage collector to delete the pods
Jan 31 13:20:47.325: INFO: Deleting DaemonSet.extensions daemon-set took: 60.408032ms
Jan 31 13:20:48.826: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.500886218s
Jan 31 13:20:56.737: INFO: Number of nodes with available pods: 0
Jan 31 13:20:56.737: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 13:20:56.741: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9739/daemonsets","resourceVersion":"22563890"},"items":null}
Jan 31 13:20:56.744: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9739/pods","resourceVersion":"22563890"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:20:56.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9739" for this suite.
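A minimal DaemonSet of the shape "Creating a simple DaemonSet" suggests could look like the following sketch; the label key and image are assumptions, since the actual spec is generated by the e2e framework:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set             # label key is an assumption
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # illustrative image
```

Because a DaemonSet's desired state is one pod per eligible node, the controller treats a pod stuck in phase Failed as missing and deletes and recreates it; that replacement is the "revived" daemon pod the test asserts after forcing a pod's phase to 'Failed'.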
Jan 31 13:21:02.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:21:02.956: INFO: namespace daemonsets-9739 deletion completed in 6.19591506s
• [SLOW TEST:27.440 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:21:02.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 31 13:21:03.111: INFO: Waiting up to 5m0s for pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a" in namespace "containers-7484" to be "success or failure"
Jan 31 13:21:03.145: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.590849ms
Jan 31 13:21:05.165: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052760745s
Jan 31 13:21:07.171: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059552209s
Jan 31 13:21:09.186: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074474693s
Jan 31 13:21:11.197: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08531033s
Jan 31 13:21:13.208: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096210062s
STEP: Saw pod success
Jan 31 13:21:13.208: INFO: Pod "client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a" satisfied condition "success or failure"
Jan 31 13:21:13.214: INFO: Trying to get logs from node iruya-node pod client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a container test-container:
STEP: delete the pod
Jan 31 13:21:13.297: INFO: Waiting for pod client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a to disappear
Jan 31 13:21:13.361: INFO: Pod client-containers-5a4e1a28-705e-4a6c-84db-d030dab15e5a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:21:13.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7484" for this suite.
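The "override all" pod above sets both `command` and `args`, which replace the image's ENTRYPOINT and CMD respectively. A sketch of such a pod (the name, image, and values here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumption; any image with /bin/echo works
    command: ["/bin/echo"]                  # overrides the image ENTRYPOINT
    args: ["override", "arguments"]         # overrides the image CMD
```

Setting only `args` keeps the image's ENTRYPOINT and replaces its CMD, while setting only `command` causes both the image's ENTRYPOINT and CMD to be ignored.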
Jan 31 13:21:19.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:21:19.574: INFO: namespace containers-7484 deletion completed in 6.201535945s
• [SLOW TEST:16.616 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:21:19.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 31 13:21:19.717: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5775" to be "success or failure"
Jan 31 13:21:19.733: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.142448ms
Jan 31 13:21:21.742: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024614318s
Jan 31 13:21:23.752: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035008853s
Jan 31 13:21:25.761: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044427219s
Jan 31 13:21:27.836: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118688929s
Jan 31 13:21:29.861: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.144046166s
Jan 31 13:21:31.879: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.162473334s
STEP: Saw pod success
Jan 31 13:21:31.880: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 31 13:21:31.887: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jan 31 13:21:32.054: INFO: Waiting for pod pod-host-path-test to disappear
Jan 31 13:21:32.093: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:21:32.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5775" for this suite.
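The hostPath test mounts a directory from the node into the pod and checks the mode bits of the mounted files. A minimal pod of that shape (the host path, image, and command are illustrative assumptions; only the pod and container names come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test          # name as it appears in the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: docker.io/library/busybox:1.29   # assumption
    command: ["ls", "-l", "/test-volume"]   # illustrative mode check
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-e2e       # illustrative host path
      type: DirectoryOrCreate
```

A hostPath volume exposes the node's filesystem directly, which is why this check is tagged [LinuxOnly]: the expected mode bits are POSIX permissions.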
Jan 31 13:21:38.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:21:38.345: INFO: namespace hostpath-5775 deletion completed in 6.244743331s
• [SLOW TEST:18.770 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:21:38.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 13:21:38.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d" in namespace "projected-7377" to be "success or failure"
Jan 31 13:21:38.514: INFO: Pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d": Phase="Pending", Reason="", readiness=false. Elapsed: 87.522055ms
Jan 31 13:21:40.540: INFO: Pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113458166s
Jan 31 13:21:42.596: INFO: Pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168867997s
Jan 31 13:21:44.619: INFO: Pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19273431s
Jan 31 13:21:46.629: INFO: Pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.202585149s
STEP: Saw pod success
Jan 31 13:21:46.629: INFO: Pod "downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d" satisfied condition "success or failure"
Jan 31 13:21:46.639: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d container client-container:
STEP: delete the pod
Jan 31 13:21:46.688: INFO: Waiting for pod downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d to disappear
Jan 31 13:21:46.698: INFO: Pod downwardapi-volume-0e7e1715-881d-42e7-90fb-b1c11406c72d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:21:46.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7377" for this suite.
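The downward API volume plugin exercised above projects the container's CPU request into a file inside the pod. A sketch of the relevant spec (the pod name, image, request value, and mount path are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumption
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                     # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

With the default divisor of 1, the CPU request is rounded up to whole cores when written to the file; specifying `divisor: 1m` on the resourceFieldRef would preserve millicore granularity.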
Jan 31 13:21:52.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:21:52.942: INFO: namespace projected-7377 deletion completed in 6.237519491s
• [SLOW TEST:14.595 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:21:52.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-76162c4f-8a9e-47da-a5c4-eebb917cea44
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-76162c4f-8a9e-47da-a5c4-eebb917cea44
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:23:19.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7907" for this suite.
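The long "waiting to observe update in volume" phase above reflects how the kubelet propagates ConfigMap changes into projected volumes: on its periodic sync rather than instantly. A sketch of the setup (pod name, image, command, and mount path are assumptions; the ConfigMap name in the test carries a generated suffix):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-watcher      # hypothetical name
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # illustrative; the test appends a UID suffix
```

Because the refresh is eventual, both tests and applications consuming ConfigMap-backed files must tolerate a delay between updating the ConfigMap and the new data appearing in the mounted files.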
Jan 31 13:23:41.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:23:41.647: INFO: namespace projected-7907 deletion completed in 22.163665215s
• [SLOW TEST:108.705 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:23:41.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 31 13:23:41.785: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564234,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 13:23:41.785: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564234,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 31 13:23:51.814: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564248,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 31 13:23:51.815: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564248,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 31 13:24:01.834: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564262,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 13:24:01.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564262,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 31 13:24:11.861: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564276,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 13:24:11.863: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-a,UID:de74e99d-4dd0-470d-b127-cbad9f931020,ResourceVersion:22564276,Generation:0,CreationTimestamp:2020-01-31 13:23:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 31 13:24:21.884: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-b,UID:a7dccf28-a702-4d87-8c79-d1fc4f6ebe36,ResourceVersion:22564292,Generation:0,CreationTimestamp:2020-01-31 13:24:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 31 13:24:21.884: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-b,UID:a7dccf28-a702-4d87-8c79-d1fc4f6ebe36,ResourceVersion:22564292,Generation:0,CreationTimestamp:2020-01-31 13:24:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 31 13:24:31.895: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-b,UID:a7dccf28-a702-4d87-8c79-d1fc4f6ebe36,ResourceVersion:22564306,Generation:0,CreationTimestamp:2020-01-31 13:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 31 13:24:31.896: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2634,SelfLink:/api/v1/namespaces/watch-2634/configmaps/e2e-watch-test-configmap-b,UID:a7dccf28-a702-4d87-8c79-d1fc4f6ebe36,ResourceVersion:22564306,Generation:0,CreationTimestamp:2020-01-31 13:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:24:41.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2634" for this suite. 
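The label-selector watch behavior exercised above can be illustrated with a minimal ConfigMap like the one the test creates. This is a hedged reconstruction, not the test's actual manifest: the name, namespace, and label are taken from the log output, while the exact field layout is an assumption.

```yaml
# Hypothetical reconstruction of the test's ConfigMap A; the label is what
# the watchers in the log select on (watch-this-configmap: multiple-watchers-A).
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-2634
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"   # the test bumps this value to trigger MODIFIED events
```

A watcher such as `kubectl get configmap -n watch-2634 -l watch-this-configmap=multiple-watchers-A --watch` would then observe the ADDED, MODIFIED, and DELETED notifications shown in the log above.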
Jan 31 13:24:47.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:24:48.116: INFO: namespace watch-2634 deletion completed in 6.205822462s • [SLOW TEST:66.468 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:24:48.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 31 13:24:55.413: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
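The FallbackToLogsOnError behavior checked above ("Expected: &{DONE} to match Container's Termination Message: DONE") can be sketched as a minimal pod spec. The image and command here are illustrative assumptions, not the test's actual container.

```yaml
# Sketch of a pod using TerminationMessagePolicy: FallbackToLogsOnError.
# When the container fails without writing to terminationMessagePath, the
# tail of its log (here, "DONE") becomes the termination message.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
    terminationMessagePath: /dev/termination-log
```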
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:24:55.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7159" for this suite. Jan 31 13:25:01.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:25:01.714: INFO: namespace container-runtime-7159 deletion completed in 6.197350172s • [SLOW TEST:13.598 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:25:01.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 31 13:25:01.855: INFO: Creating deployment "test-recreate-deployment" Jan 31 13:25:01.887: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 31 13:25:01.938: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 31 13:25:03.962: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 31 13:25:03.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 13:25:05.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 13:25:07.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 13:25:09.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073902, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716073901, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 31 13:25:11.977: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 31 13:25:11.990: INFO: Updating deployment test-recreate-deployment Jan 31 13:25:11.991: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 31 13:25:12.410: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6799,SelfLink:/apis/apps/v1/namespaces/deployment-6799/deployments/test-recreate-deployment,UID:f1763696-1744-4055-87a2-d4b45bec2c0c,ResourceVersion:22564428,Generation:2,CreationTimestamp:2020-01-31 13:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-31 13:25:12 +0000 UTC 2020-01-31 13:25:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-31 13:25:12 +0000 UTC 2020-01-31 13:25:01 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 31 13:25:12.430: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment 
"test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6799,SelfLink:/apis/apps/v1/namespaces/deployment-6799/replicasets/test-recreate-deployment-5c8c9cc69d,UID:39cc9200-b9a2-46e7-b2ad-4c4717578e99,ResourceVersion:22564425,Generation:1,CreationTimestamp:2020-01-31 13:25:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f1763696-1744-4055-87a2-d4b45bec2c0c 0xc000afdeb7 0xc000afdeb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 31 13:25:12.430: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 31 13:25:12.430: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6799,SelfLink:/apis/apps/v1/namespaces/deployment-6799/replicasets/test-recreate-deployment-6df85df6b9,UID:7044ea68-b861-4cbd-883c-20a40e7fd16a,ResourceVersion:22564417,Generation:2,CreationTimestamp:2020-01-31 13:25:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f1763696-1744-4055-87a2-d4b45bec2c0c 0xc000afdfb7 0xc000afdfb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 31 13:25:12.563: INFO: Pod "test-recreate-deployment-5c8c9cc69d-bstnv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-bstnv,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6799,SelfLink:/api/v1/namespaces/deployment-6799/pods/test-recreate-deployment-5c8c9cc69d-bstnv,UID:8c6498ad-8d01-415e-9fbd-f66fbb4365ae,ResourceVersion:22564429,Generation:0,CreationTimestamp:2020-01-31 13:25:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 39cc9200-b9a2-46e7-b2ad-4c4717578e99 0xc002982cf7 0xc002982cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nkqvw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nkqvw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nkqvw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002982d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002982d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:25:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:25:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:25:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:25:12 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 13:25:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:25:12.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6799" for this suite. 
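The Recreate-strategy deployment whose object dump appears above can be sketched as a manifest. Field values are taken from the dump where available (name, labels, selector, image, strategy); the overall layout is a reconstruction, not the test's source.

```yaml
# Hedged reconstruction of "test-recreate-deployment" from the dump above.
# With strategy type Recreate, all old pods are torn down before any new
# pods are created -- which is exactly what the test verifies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```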
Jan 31 13:25:20.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:25:20.771: INFO: namespace deployment-6799 deletion completed in 8.189879032s • [SLOW TEST:19.056 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:25:20.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jan 31 13:25:20.948: INFO: Waiting up to 5m0s for pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75" in namespace "var-expansion-2898" to be "success or failure" Jan 31 13:25:20.966: INFO: Pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 18.308735ms Jan 31 13:25:22.985: INFO: Pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036957053s Jan 31 13:25:24.994: INFO: Pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045955514s Jan 31 13:25:27.015: INFO: Pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066468338s Jan 31 13:25:29.023: INFO: Pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075289071s STEP: Saw pod success Jan 31 13:25:29.024: INFO: Pod "var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75" satisfied condition "success or failure" Jan 31 13:25:29.026: INFO: Trying to get logs from node iruya-node pod var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75 container dapi-container: STEP: delete the pod Jan 31 13:25:29.142: INFO: Waiting for pod var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75 to disappear Jan 31 13:25:29.150: INFO: Pod var-expansion-d316238b-5dc5-4943-8821-1dccf13f3b75 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:25:29.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2898" for this suite. 
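The variable substitution the test above exercises can be sketched with a minimal pod spec. The environment-variable name, image, and command are assumptions; only the container name (dapi-container) and the `$(VAR)` substitution mechanism come from the log.

```yaml
# Sketch of substituting an environment variable into a container's args.
# The kubelet expands $(MY_VALUE) before the container starts.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c"]
    args: ["echo $(MY_VALUE)"]
    env:
    - name: MY_VALUE
      value: "test-value"
```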
Jan 31 13:25:35.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:25:35.357: INFO: namespace var-expansion-2898 deletion completed in 6.201957952s
• [SLOW TEST:14.586 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:25:35.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 13:25:35.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7895'
Jan 31 13:25:35.560: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 13:25:35.561: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 31 13:25:35.642: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-k9qzk]
Jan 31 13:25:35.642: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-k9qzk" in namespace "kubectl-7895" to be "running and ready"
Jan 31 13:25:35.670: INFO: Pod "e2e-test-nginx-rc-k9qzk": Phase="Pending", Reason="", readiness=false. Elapsed: 27.731532ms
Jan 31 13:25:37.686: INFO: Pod "e2e-test-nginx-rc-k9qzk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043490195s
Jan 31 13:25:39.761: INFO: Pod "e2e-test-nginx-rc-k9qzk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118592448s
Jan 31 13:25:41.771: INFO: Pod "e2e-test-nginx-rc-k9qzk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128551398s
Jan 31 13:25:43.788: INFO: Pod "e2e-test-nginx-rc-k9qzk": Phase="Running", Reason="", readiness=true. Elapsed: 8.14499972s
Jan 31 13:25:43.788: INFO: Pod "e2e-test-nginx-rc-k9qzk" satisfied condition "running and ready"
Jan 31 13:25:43.788: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-k9qzk]
Jan 31 13:25:43.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7895'
Jan 31 13:25:44.080: INFO: stderr: ""
Jan 31 13:25:44.080: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 31 13:25:44.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7895'
Jan 31 13:25:44.197: INFO: stderr: ""
Jan 31 13:25:44.197: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:25:44.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7895" for this suite.
Jan 31 13:26:06.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:26:06.365: INFO: namespace kubectl-7895 deletion completed in 22.162458303s
• [SLOW TEST:31.007 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:26:06.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 31 13:26:06.502: INFO: Waiting up to 5m0s for pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf" in namespace "downward-api-8165" to be "success or failure"
Jan 31 13:26:06.517: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.400188ms
Jan 31 13:26:08.543: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040376188s
Jan 31 13:26:10.559: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05579513s
Jan 31 13:26:12.586: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083165006s
Jan 31 13:26:14.613: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf": Phase="Running", Reason="", readiness=true. Elapsed: 8.10983162s
Jan 31 13:26:16.619: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116540041s
STEP: Saw pod success
Jan 31 13:26:16.620: INFO: Pod "downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf" satisfied condition "success or failure"
Jan 31 13:26:16.630: INFO: Trying to get logs from node iruya-node pod downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf container dapi-container:
STEP: delete the pod
Jan 31 13:26:16.816: INFO: Waiting for pod downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf to disappear
Jan 31 13:26:16.830: INFO: Pod downward-api-3b3d32a8-c41e-45f2-9277-7af44c9024cf no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:26:16.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8165" for this suite.
Jan 31 13:26:22.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:26:23.038: INFO: namespace downward-api-8165 deletion completed in 6.201332515s
• [SLOW TEST:16.671 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:26:23.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 13:26:23.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1" in namespace "projected-8682" to be "success or failure"
Jan 31 13:26:23.158: INFO: Pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.400934ms
Jan 31 13:26:25.164: INFO: Pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016187246s
Jan 31 13:26:27.196: INFO: Pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047315567s
Jan 31 13:26:29.206: INFO: Pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057429709s
Jan 31 13:26:31.232: INFO: Pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083385025s
STEP: Saw pod success
Jan 31 13:26:31.232: INFO: Pod "downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1" satisfied condition "success or failure"
Jan 31 13:26:31.237: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1 container client-container:
STEP: delete the pod
Jan 31 13:26:31.432: INFO: Waiting for pod downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1 to disappear
Jan 31 13:26:31.441: INFO: Pod downwardapi-volume-772b481a-8b41-4158-9dba-f3fa146187a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:26:31.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8682" for this suite.
Jan 31 13:26:37.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:26:37.603: INFO: namespace projected-8682 deletion completed in 6.156618545s
• [SLOW TEST:14.565 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:26:37.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 31 13:26:46.424: INFO: Successfully updated pod "labelsupdate5be37848-a023-4d74-8404-c455eaa46a6e"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:26:48.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4022" for this suite.
Jan 31 13:27:10.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:27:10.755: INFO: namespace projected-4022 deletion completed in 22.222958948s
• [SLOW TEST:33.152 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:27:10.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-9e4f8f5f-e169-4414-a7a9-c422d7adc4e2
STEP: Creating a pod to test consume secrets
Jan 31 13:27:10.866: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792" in namespace "projected-8126" to be "success or failure"
Jan 31 13:27:10.878: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792": Phase="Pending", Reason="", readiness=false. Elapsed: 11.670939ms
Jan 31 13:27:12.888: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021812257s
Jan 31 13:27:14.902: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035938382s
Jan 31 13:27:16.933: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067339488s
Jan 31 13:27:18.941: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075401118s
Jan 31 13:27:20.950: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084008575s
STEP: Saw pod success
Jan 31 13:27:20.950: INFO: Pod "pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792" satisfied condition "success or failure"
Jan 31 13:27:20.953: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792 container secret-volume-test:
STEP: delete the pod
Jan 31 13:27:21.001: INFO: Waiting for pod pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792 to disappear
Jan 31 13:27:21.096: INFO: Pod pod-projected-secrets-81132a6d-819b-4343-94a9-09281ca5b792 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:27:21.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8126" for this suite.
Jan 31 13:27:27.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:27:27.340: INFO: namespace projected-8126 deletion completed in 6.237794993s
• [SLOW TEST:16.583 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:27:27.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 31 13:27:33.484: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-511ed37f-b5e1-422a-a96d-7ab0cf93f414,GenerateName:,Namespace:events-2549,SelfLink:/api/v1/namespaces/events-2549/pods/send-events-511ed37f-b5e1-422a-a96d-7ab0cf93f414,UID:87f1cb98-727e-430e-ae45-62ba448f04dc,ResourceVersion:22564805,Generation:0,CreationTimestamp:2020-01-31 13:27:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 421019597,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kgxn9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kgxn9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kgxn9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00285b2b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00285b2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:27:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:27:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:27:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 13:27:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-31 13:27:27 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-31 13:27:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://bcc6a7a168bf1eef6fcc2a3e50bdcb9d19255899a8eb38e10367f830ee118ee3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jan 31 13:27:35.498: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 31 13:27:37.511: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:27:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2549" for this suite.
Jan 31 13:28:17.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:28:17.874: INFO: namespace events-2549 deletion completed in 40.254219299s
• [SLOW TEST:50.533 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:28:17.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 31 13:28:26.012: INFO: Pod pod-hostip-0f52d881-dfc0-4afe-9eb5-9f7951170001 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:28:26.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-360" for this suite.
Jan 31 13:28:48.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:28:48.200: INFO: namespace pods-360 deletion completed in 22.17396434s
• [SLOW TEST:30.325 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:28:48.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 13:28:48.321: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c" in namespace "projected-2969" to be "success or failure"
Jan 31 13:28:48.335: INFO: Pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.978889ms
Jan 31 13:28:50.348: INFO: Pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026414663s
Jan 31 13:28:52.388: INFO: Pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066919464s
Jan 31 13:28:54.400: INFO: Pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078600073s
Jan 31 13:28:56.424: INFO: Pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102062356s
STEP: Saw pod success
Jan 31 13:28:56.424: INFO: Pod "downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c" satisfied condition "success or failure"
Jan 31 13:28:56.431: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c container client-container:
STEP: delete the pod
Jan 31 13:28:56.716: INFO: Waiting for pod downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c to disappear
Jan 31 13:28:56.729: INFO: Pod downwardapi-volume-b874b344-2933-4285-8f63-3dc6a5479d3c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:28:56.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2969" for this suite.
Jan 31 13:29:02.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:29:02.880: INFO: namespace projected-2969 deletion completed in 6.143408808s
• [SLOW TEST:14.679 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:29:02.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9984
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 31 13:29:02.980: INFO: Found 0 stateful pods, waiting for 3
Jan 31 13:29:12.988: INFO: Found 2 stateful pods, waiting for 3
Jan 31 13:29:22.990: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:29:22.990: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:29:22.990: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 13:29:32.991: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:29:32.991: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:29:32.991: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 31 13:29:33.037: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 31 13:29:43.104: INFO: Updating stateful set ss2
Jan 31 13:29:43.330: INFO: Waiting for Pod statefulset-9984/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 13:29:53.350: INFO: Waiting for Pod statefulset-9984/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 31 13:30:03.653: INFO: Found 2 stateful pods, waiting for 3
Jan 31 13:30:13.664: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:30:13.664: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:30:13.664: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 13:30:23.675: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:30:23.675: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 13:30:23.675: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 31 13:30:23.721: INFO: Updating stateful set ss2
Jan 31 13:30:23.737: INFO: Waiting for Pod statefulset-9984/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 13:30:33.780: INFO: Updating stateful set ss2
Jan 31 13:30:33.845: INFO: Waiting for StatefulSet statefulset-9984/ss2 to complete update
Jan 31 13:30:33.845: INFO: Waiting for Pod statefulset-9984/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 13:30:43.884: INFO: Waiting for StatefulSet statefulset-9984/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 31 13:30:53.883: INFO: Deleting all statefulset in ns statefulset-9984
Jan 31 13:30:53.889: INFO: Scaling statefulset ss2 to 0
Jan 31 13:31:23.931: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 13:31:23.939: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:31:23.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9984" for this suite.
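The canary and phased rolling updates in the spec above are driven by the StatefulSet `updateStrategy.rollingUpdate.partition` field: only pods whose ordinal is greater than or equal to the partition receive the new template. A sketch of the relevant spec using the names and images that appear in this log (the labels and other fields are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                  # the log shows "Creating service test"
  selector:
    matchLabels:
      app: ss2                       # illustrative label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                   # canary: only ss2-2 updates; lowering it phases in ss2-1, then ss2-0
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the updated image from the log
```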
Jan 31 13:31:32.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:31:32.187: INFO: namespace statefulset-9984 deletion completed in 8.217986821s
• [SLOW TEST:149.306 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:31:32.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 13:31:32.348: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 31 13:31:32.413: INFO: Number of nodes with available pods: 0
Jan 31 13:31:32.413: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
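The complex-daemon spec above relies on a DaemonSet `nodeSelector`: with no node carrying the label, zero daemon pods run; relabeling a node launches a pod there, and changing the label again evicts it. A sketch of such a DaemonSet — the label key/value and image are illustrative, since the log only names the DaemonSet "daemon-set":

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate              # the strategy the test later switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                  # illustrative key; labeling a node color=blue schedules the pod there
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```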
Jan 31 13:31:32.459: INFO: Number of nodes with available pods: 0
Jan 31 13:31:32.459: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:34.412: INFO: Number of nodes with available pods: 0
Jan 31 13:31:34.412: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:34.957: INFO: Number of nodes with available pods: 0
Jan 31 13:31:34.957: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:35.469: INFO: Number of nodes with available pods: 0
Jan 31 13:31:35.469: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:36.476: INFO: Number of nodes with available pods: 0
Jan 31 13:31:36.476: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:37.469: INFO: Number of nodes with available pods: 0
Jan 31 13:31:37.469: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:38.471: INFO: Number of nodes with available pods: 0
Jan 31 13:31:38.472: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:39.472: INFO: Number of nodes with available pods: 0
Jan 31 13:31:39.472: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:40.468: INFO: Number of nodes with available pods: 0
Jan 31 13:31:40.468: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:41.469: INFO: Number of nodes with available pods: 0
Jan 31 13:31:41.469: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:42.472: INFO: Number of nodes with available pods: 1
Jan 31 13:31:42.472: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 31 13:31:42.562: INFO: Number of nodes with available pods: 1
Jan 31 13:31:42.562: INFO: Number of running nodes: 0, number of available pods: 1
Jan 31 13:31:43.573: INFO: Number of nodes with available pods: 0
Jan 31 13:31:43.573: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 31 13:31:43.599: INFO: Number of nodes with available pods: 0
Jan 31 13:31:43.599: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:44.641: INFO: Number of nodes with available pods: 0
Jan 31 13:31:44.641: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:45.614: INFO: Number of nodes with available pods: 0
Jan 31 13:31:45.614: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:46.617: INFO: Number of nodes with available pods: 0
Jan 31 13:31:46.617: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:47.608: INFO: Number of nodes with available pods: 0
Jan 31 13:31:47.608: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:48.611: INFO: Number of nodes with available pods: 0
Jan 31 13:31:48.612: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:49.610: INFO: Number of nodes with available pods: 0
Jan 31 13:31:49.610: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:50.636: INFO: Number of nodes with available pods: 0
Jan 31 13:31:50.637: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:51.614: INFO: Number of nodes with available pods: 0
Jan 31 13:31:51.614: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:52.613: INFO: Number of nodes with available pods: 0
Jan 31 13:31:52.613: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:53.615: INFO: Number of nodes with available pods: 0
Jan 31 13:31:53.616: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:54.619: INFO: Number of nodes with available pods: 0
Jan 31 13:31:54.619: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:55.612: INFO: Number of nodes with available pods: 0
Jan 31 13:31:55.612: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:56.660: INFO: Number of nodes with available pods: 0
Jan 31 13:31:56.660: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:57.610: INFO: Number of nodes with available pods: 0
Jan 31 13:31:57.610: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:58.616: INFO: Number of nodes with available pods: 0
Jan 31 13:31:58.616: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:31:59.616: INFO: Number of nodes with available pods: 0
Jan 31 13:31:59.617: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:32:00.617: INFO: Number of nodes with available pods: 0
Jan 31 13:32:00.618: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:32:01.630: INFO: Number of nodes with available pods: 0
Jan 31 13:32:01.630: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:32:02.606: INFO: Number of nodes with available pods: 0
Jan 31 13:32:02.606: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:32:03.620: INFO: Number of nodes with available pods: 1
Jan 31 13:32:03.620: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8262, will wait for the garbage collector to delete the pods
Jan 31 13:32:03.700: INFO: Deleting DaemonSet.extensions daemon-set took: 12.859245ms
Jan 31 13:32:04.001: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.977875ms
Jan 31 13:32:16.614: INFO: Number of nodes with available pods: 0
Jan 31 13:32:16.614: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 13:32:16.620: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8262/daemonsets","resourceVersion":"22565557"},"items":null}
Jan 31 13:32:16.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8262/pods","resourceVersion":"22565557"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:32:16.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8262" for this suite.
Jan 31 13:32:22.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:32:22.892: INFO: namespace daemonsets-8262 deletion completed in 6.204802796s
• [SLOW TEST:50.705 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:32:22.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-j98s
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 13:32:23.065: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-j98s" in namespace "subpath-9650" to be "success or failure"
Jan 31 13:32:23.076: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110736ms
Jan 31 13:32:25.087: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021301429s
Jan 31 13:32:27.097: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031515966s
Jan 31 13:32:29.105: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038824653s
Jan 31 13:32:31.138: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 8.071556071s
Jan 31 13:32:33.148: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 10.082141714s
Jan 31 13:32:35.163: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 12.096965042s
Jan 31 13:32:37.172: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 14.10610989s
Jan 31 13:32:39.184: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 16.117564659s
Jan 31 13:32:41.229: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 18.162647525s
Jan 31 13:32:43.247: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 20.180992575s
Jan 31 13:32:45.260: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 22.193632715s
Jan 31 13:32:47.571: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 24.50460707s
Jan 31 13:32:49.581: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Running", Reason="", readiness=true. Elapsed: 26.514540662s
Jan 31 13:32:51.587: INFO: Pod "pod-subpath-test-downwardapi-j98s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.521514983s
STEP: Saw pod success
Jan 31 13:32:51.588: INFO: Pod "pod-subpath-test-downwardapi-j98s" satisfied condition "success or failure"
Jan 31 13:32:51.591: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-j98s container test-container-subpath-downwardapi-j98s:
STEP: delete the pod
Jan 31 13:32:51.890: INFO: Waiting for pod pod-subpath-test-downwardapi-j98s to disappear
Jan 31 13:32:51.905: INFO: Pod pod-subpath-test-downwardapi-j98s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-j98s
Jan 31 13:32:51.906: INFO: Deleting pod "pod-subpath-test-downwardapi-j98s" in namespace "subpath-9650"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:32:51.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9650" for this suite.
Jan 31 13:32:57.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:32:58.047: INFO: namespace subpath-9650 deletion completed in 6.130289796s
• [SLOW TEST:35.154 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:32:58.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3449/configmap-test-ec71a9e9-2000-47d7-b548-061db0b080c1
STEP: Creating a pod to test consume configMaps
Jan 31 13:32:58.163: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587" in namespace "configmap-3449" to be "success or failure"
Jan 31 13:32:58.168: INFO: Pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587": Phase="Pending", Reason="", readiness=false. Elapsed: 5.180123ms
Jan 31 13:33:00.176: INFO: Pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013337232s
Jan 31 13:33:02.191: INFO: Pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027785591s
Jan 31 13:33:04.201: INFO: Pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038274393s
Jan 31 13:33:06.217: INFO: Pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05388561s
STEP: Saw pod success
Jan 31 13:33:06.217: INFO: Pod "pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587" satisfied condition "success or failure"
Jan 31 13:33:06.235: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587 container env-test:
STEP: delete the pod
Jan 31 13:33:06.423: INFO: Waiting for pod pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587 to disappear
Jan 31 13:33:06.552: INFO: Pod pod-configmaps-c1ac58f0-da89-4fe5-9f89-3afaf70f3587 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:33:06.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3449" for this suite.
Jan 31 13:33:12.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:33:12.726: INFO: namespace configmap-3449 deletion completed in 6.156622546s
• [SLOW TEST:14.679 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:33:12.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 13:33:12.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:33:23.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3755" for this suite.
Jan 31 13:34:05.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:34:05.276: INFO: namespace pods-3755 deletion completed in 42.236751296s
• [SLOW TEST:52.550 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:34:05.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 13:34:05.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602" in namespace "projected-2572" to be "success or failure"
Jan 31 13:34:05.447: INFO: Pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105561ms
Jan 31 13:34:07.461: INFO: Pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022783685s
Jan 31 13:34:09.470: INFO: Pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031873057s
Jan 31 13:34:11.504: INFO: Pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065256253s
Jan 31 13:34:13.515: INFO: Pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076323149s
STEP: Saw pod success
Jan 31 13:34:13.515: INFO: Pod "downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602" satisfied condition "success or failure"
Jan 31 13:34:13.520: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602 container client-container:
STEP: delete the pod
Jan 31 13:34:13.626: INFO: Waiting for pod downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602 to disappear
Jan 31 13:34:13.636: INFO: Pod downwardapi-volume-a26ae726-99be-4fca-af1e-70a6fa03d602 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:34:13.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2572" for this suite.
Jan 31 13:34:19.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:34:19.811: INFO: namespace projected-2572 deletion completed in 6.163163708s
• [SLOW TEST:14.535 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:34:19.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a0b2a8cc-ca28-41fe-9b54-7a44a4982488
STEP: Creating a pod to test consume configMaps
Jan 31 13:34:19.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4" in namespace "configmap-1057" to be "success or failure"
Jan 31 13:34:19.961: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227826ms
Jan 31 13:34:22.008: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054792123s
Jan 31 13:34:24.024: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070733628s
Jan 31 13:34:26.080: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126844594s
Jan 31 13:34:28.092: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4": Phase="Running", Reason="", readiness=true. Elapsed: 8.139194218s
Jan 31 13:34:30.104: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150338374s
STEP: Saw pod success
Jan 31 13:34:30.104: INFO: Pod "pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4" satisfied condition "success or failure"
Jan 31 13:34:30.111: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4 container configmap-volume-test:
STEP: delete the pod
Jan 31 13:34:30.233: INFO: Waiting for pod pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4 to disappear
Jan 31 13:34:30.242: INFO: Pod pod-configmaps-d24bde7c-cc0a-4493-a807-5ffae974d0d4 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:34:30.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1057" for this suite.
Jan 31 13:34:36.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:34:36.419: INFO: namespace configmap-1057 deletion completed in 6.16702344s
• [SLOW TEST:16.607 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:34:36.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 31 13:34:36.496: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 13:34:36.554: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 13:34:36.560: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 31 13:34:36.575: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 31 13:34:36.575: INFO: Container weave ready: true, restart count 0
Jan 31 13:34:36.575: INFO: Container weave-npc ready: true, restart count 0
Jan 31 13:34:36.575: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.575: INFO: Container kube-proxy ready: true, restart count 0
Jan 31 13:34:36.575: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 31 13:34:36.591: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container kube-apiserver ready: true, restart count 0
Jan 31 13:34:36.591: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container kube-scheduler ready: true, restart count 13
Jan 31 13:34:36.591: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container coredns ready: true, restart count 0
Jan 31 13:34:36.591: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container etcd ready: true, restart count 0
Jan 31 13:34:36.591: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container weave ready: true, restart count 0
Jan 31 13:34:36.591: INFO: Container weave-npc ready: true, restart count 0
Jan 31 13:34:36.591: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container coredns ready: true, restart count 0
Jan 31 13:34:36.591: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container kube-controller-manager ready: true, restart count 19
Jan 31 13:34:36.591: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 31 13:34:36.591: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15eefc12ba264b8d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:34:37.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5645" for this suite.
Jan 31 13:34:43.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:34:43.830: INFO: namespace sched-pred-5645 deletion completed in 6.187154131s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.411 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:34:43.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 31 13:34:43.993: INFO: Waiting up to 5m0s for pod "pod-12e1ff25-14e0-4877-9403-27cc83489927" in namespace "emptydir-2616" to be "success or failure"
Jan 31 13:34:44.022: INFO: Pod "pod-12e1ff25-14e0-4877-9403-27cc83489927": Phase="Pending", Reason="", readiness=false. Elapsed: 28.513917ms
Jan 31 13:34:46.122: INFO: Pod "pod-12e1ff25-14e0-4877-9403-27cc83489927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12865023s
Jan 31 13:34:48.140: INFO: Pod "pod-12e1ff25-14e0-4877-9403-27cc83489927": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146073367s
Jan 31 13:34:50.153: INFO: Pod "pod-12e1ff25-14e0-4877-9403-27cc83489927": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159573316s
Jan 31 13:34:52.160: INFO: Pod "pod-12e1ff25-14e0-4877-9403-27cc83489927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.16645579s
STEP: Saw pod success
Jan 31 13:34:52.160: INFO: Pod "pod-12e1ff25-14e0-4877-9403-27cc83489927" satisfied condition "success or failure"
Jan 31 13:34:52.166: INFO: Trying to get logs from node iruya-node pod pod-12e1ff25-14e0-4877-9403-27cc83489927 container test-container:
STEP: delete the pod
Jan 31 13:34:52.306: INFO: Waiting for pod pod-12e1ff25-14e0-4877-9403-27cc83489927 to disappear
Jan 31 13:34:52.321: INFO: Pod pod-12e1ff25-14e0-4877-9403-27cc83489927 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:34:52.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2616" for this suite.
Jan 31 13:34:58.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:34:58.523: INFO: namespace emptydir-2616 deletion completed in 6.194017924s
• [SLOW TEST:14.692 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:34:58.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 31 13:34:58.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1981'
Jan 31 13:35:01.006: INFO: stderr: ""
Jan 31 13:35:01.007: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 13:35:01.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981'
Jan 31 13:35:01.233: INFO: stderr: ""
Jan 31 13:35:01.233: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-whtl8 "
Jan 31 13:35:01.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981'
Jan 31 13:35:01.394: INFO: stderr: ""
Jan 31 13:35:01.394: INFO: stdout: ""
Jan 31 13:35:01.394: INFO: update-demo-nautilus-2l6vf is created but not running
Jan 31 13:35:06.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981'
Jan 31 13:35:07.277: INFO: stderr: ""
Jan 31 13:35:07.277: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-whtl8 "
Jan 31 13:35:07.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981'
Jan 31 13:35:07.795: INFO: stderr: ""
Jan 31 13:35:07.795: INFO: stdout: ""
Jan 31 13:35:07.795: INFO: update-demo-nautilus-2l6vf is created but not running
Jan 31 13:35:12.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981'
Jan 31 13:35:12.996: INFO: stderr: ""
Jan 31 13:35:12.996: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-whtl8 "
Jan 31 13:35:12.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981'
Jan 31 13:35:13.129: INFO: stderr: ""
Jan 31 13:35:13.130: INFO: stdout: "true"
Jan 31 13:35:13.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981'
Jan 31 13:35:13.271: INFO: stderr: ""
Jan 31 13:35:13.271: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 13:35:13.271: INFO: validating pod update-demo-nautilus-2l6vf
Jan 31 13:35:13.284: INFO: got data: { "image": "nautilus.jpg" }
Jan 31 13:35:13.285: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 13:35:13.285: INFO: update-demo-nautilus-2l6vf is verified up and running
Jan 31 13:35:13.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whtl8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981'
Jan 31 13:35:13.375: INFO: stderr: ""
Jan 31 13:35:13.376: INFO: stdout: "true"
Jan 31 13:35:13.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-whtl8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981'
Jan 31 13:35:13.484: INFO: stderr: ""
Jan 31 13:35:13.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 13:35:13.485: INFO: validating pod update-demo-nautilus-whtl8
Jan 31 13:35:13.499: INFO: got data: { "image": "nautilus.jpg" }
Jan 31 13:35:13.499: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 13:35:13.499: INFO: update-demo-nautilus-whtl8 is verified up and running
STEP: scaling down the replication controller
Jan 31 13:35:13.502: INFO: scanned /root for discovery docs:
Jan 31 13:35:13.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1981'
Jan 31 13:35:14.731: INFO: stderr: ""
Jan 31 13:35:14.732: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 13:35:14.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:14.991: INFO: stderr: "" Jan 31 13:35:14.991: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-whtl8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 31 13:35:19.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:20.154: INFO: stderr: "" Jan 31 13:35:20.154: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-whtl8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 31 13:35:25.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:25.315: INFO: stderr: "" Jan 31 13:35:25.315: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-whtl8 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 31 13:35:30.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:30.541: INFO: stderr: "" Jan 31 13:35:30.541: INFO: stdout: "update-demo-nautilus-2l6vf " Jan 31 13:35:30.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:30.648: INFO: stderr: "" Jan 31 13:35:30.648: INFO: stdout: "true" Jan 31 13:35:30.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:30.754: INFO: stderr: "" Jan 31 13:35:30.754: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 13:35:30.754: INFO: validating pod update-demo-nautilus-2l6vf Jan 31 13:35:30.760: INFO: got data: { "image": "nautilus.jpg" } Jan 31 13:35:30.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 13:35:30.760: INFO: update-demo-nautilus-2l6vf is verified up and running STEP: scaling up the replication controller Jan 31 13:35:30.763: INFO: scanned /root for discovery docs: Jan 31 13:35:30.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1981' Jan 31 13:35:31.990: INFO: stderr: "" Jan 31 13:35:31.991: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 31 13:35:31.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:32.130: INFO: stderr: "" Jan 31 13:35:32.130: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-bskf8 " Jan 31 13:35:32.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:32.307: INFO: stderr: "" Jan 31 13:35:32.308: INFO: stdout: "true" Jan 31 13:35:32.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:32.472: INFO: stderr: "" Jan 31 13:35:32.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 13:35:32.472: INFO: validating pod update-demo-nautilus-2l6vf Jan 31 13:35:32.485: INFO: got data: { "image": "nautilus.jpg" } Jan 31 13:35:32.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 13:35:32.485: INFO: update-demo-nautilus-2l6vf is verified up and running Jan 31 13:35:32.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bskf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:32.739: INFO: stderr: "" Jan 31 13:35:32.740: INFO: stdout: "" Jan 31 13:35:32.740: INFO: update-demo-nautilus-bskf8 is created but not running Jan 31 13:35:37.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:37.948: INFO: stderr: "" Jan 31 13:35:37.948: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-bskf8 " Jan 31 13:35:37.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:38.081: INFO: stderr: "" Jan 31 13:35:38.082: INFO: stdout: "true" Jan 31 13:35:38.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:38.167: INFO: stderr: "" Jan 31 13:35:38.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 13:35:38.168: INFO: validating pod update-demo-nautilus-2l6vf Jan 31 13:35:38.173: INFO: got data: { "image": "nautilus.jpg" } Jan 31 13:35:38.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 13:35:38.173: INFO: update-demo-nautilus-2l6vf is verified up and running Jan 31 13:35:38.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bskf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:38.323: INFO: stderr: "" Jan 31 13:35:38.323: INFO: stdout: "" Jan 31 13:35:38.323: INFO: update-demo-nautilus-bskf8 is created but not running Jan 31 13:35:43.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1981' Jan 31 13:35:43.533: INFO: stderr: "" Jan 31 13:35:43.533: INFO: stdout: "update-demo-nautilus-2l6vf update-demo-nautilus-bskf8 " Jan 31 13:35:43.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:43.711: INFO: stderr: "" Jan 31 13:35:43.711: INFO: stdout: "true" Jan 31 13:35:43.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l6vf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:43.929: INFO: stderr: "" Jan 31 13:35:43.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 13:35:43.930: INFO: validating pod update-demo-nautilus-2l6vf Jan 31 13:35:43.946: INFO: got data: { "image": "nautilus.jpg" } Jan 31 13:35:43.946: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 13:35:43.947: INFO: update-demo-nautilus-2l6vf is verified up and running Jan 31 13:35:43.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bskf8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:44.069: INFO: stderr: "" Jan 31 13:35:44.069: INFO: stdout: "true" Jan 31 13:35:44.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bskf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1981' Jan 31 13:35:44.170: INFO: stderr: "" Jan 31 13:35:44.171: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 31 13:35:44.171: INFO: validating pod update-demo-nautilus-bskf8 Jan 31 13:35:44.194: INFO: got data: { "image": "nautilus.jpg" } Jan 31 13:35:44.194: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 31 13:35:44.194: INFO: update-demo-nautilus-bskf8 is verified up and running STEP: using delete to clean up resources Jan 31 13:35:44.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1981' Jan 31 13:35:44.329: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 31 13:35:44.329: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 31 13:35:44.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1981' Jan 31 13:35:44.452: INFO: stderr: "No resources found.\n" Jan 31 13:35:44.452: INFO: stdout: "" Jan 31 13:35:44.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1981 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 13:35:44.712: INFO: stderr: "" Jan 31 13:35:44.712: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:35:44.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1981" for this suite. 
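The cleanup uses an immediate (grace-period-zero) forced delete, which is why kubectl prints the warning captured above. The log deletes via the original manifest on stdin (`-f -`); deleting by resource name, as sketched below, is an equivalent cluster-dependent form:

```shell
# Force-delete the RC without waiting for graceful pod termination.
# --grace-period=0 --force skips confirmation that pods actually stopped,
# which is what triggers the "may continue to run" warning.
kubectl --kubeconfig=/root/.kube/config delete rc update-demo-nautilus \
  --grace-period=0 --force --namespace=kubectl-1981

# Confirm nothing matching the selector survives, ignoring pods that are
# already marked for deletion (non-empty deletionTimestamp):
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo \
  --namespace=kubectl-1981 \
  -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
```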
Jan 31 13:36:06.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:36:06.838: INFO: namespace kubectl-1981 deletion completed in 22.114614811s • [SLOW TEST:68.313 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:36:06.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 31 13:36:06.992: INFO: Create a RollingUpdate DaemonSet Jan 31 13:36:06.999: INFO: Check that daemon pods launch on every node of the cluster Jan 31 13:36:07.006: INFO: Number of nodes with available pods: 0 Jan 31 13:36:07.006: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:08.021: INFO: Number of nodes with available pods: 0 Jan 31 13:36:08.021: INFO: Node iruya-node is running more than one daemon pod Jan 31 
13:36:09.277: INFO: Number of nodes with available pods: 0 Jan 31 13:36:09.277: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:10.019: INFO: Number of nodes with available pods: 0 Jan 31 13:36:10.019: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:11.036: INFO: Number of nodes with available pods: 0 Jan 31 13:36:11.036: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:12.022: INFO: Number of nodes with available pods: 0 Jan 31 13:36:12.022: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:13.667: INFO: Number of nodes with available pods: 0 Jan 31 13:36:13.668: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:14.345: INFO: Number of nodes with available pods: 0 Jan 31 13:36:14.345: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:15.630: INFO: Number of nodes with available pods: 0 Jan 31 13:36:15.630: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:16.056: INFO: Number of nodes with available pods: 0 Jan 31 13:36:16.056: INFO: Node iruya-node is running more than one daemon pod Jan 31 13:36:17.023: INFO: Number of nodes with available pods: 1 Jan 31 13:36:17.023: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 31 13:36:18.025: INFO: Number of nodes with available pods: 2 Jan 31 13:36:18.025: INFO: Number of running nodes: 2, number of available pods: 2 Jan 31 13:36:18.025: INFO: Update the DaemonSet to trigger a rollout Jan 31 13:36:18.039: INFO: Updating DaemonSet daemon-set Jan 31 13:36:25.495: INFO: Roll back the DaemonSet before rollout is complete Jan 31 13:36:25.510: INFO: Updating DaemonSet daemon-set Jan 31 13:36:25.510: INFO: Make sure DaemonSet rollback is complete Jan 31 13:36:25.838: INFO: Wrong image for pod: daemon-set-rrrhs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
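The update-then-rollback sequence the test performs through the API corresponds to kubectl's rollout commands. A hedged sketch using the DaemonSet name and namespace from this run (the container name `app` is an assumption, not taken from the log):

```shell
# Trigger a rollout by pointing the DaemonSet at a bad image, then roll
# back before the rollout completes. Cluster-dependent sketch; the
# container name "app" is hypothetical.
kubectl --kubeconfig=/root/.kube/config set image daemonset/daemon-set \
  app=foo:non-existent --namespace=daemonsets-2809

# Undo to the previous revision. Pods still healthy on the old revision
# should be left alone rather than restarted unnecessarily:
kubectl --kubeconfig=/root/.kube/config rollout undo daemonset/daemon-set \
  --namespace=daemonsets-2809
kubectl --kubeconfig=/root/.kube/config rollout status daemonset/daemon-set \
  --namespace=daemonsets-2809
```

The "Wrong image ... got: foo:non-existent" entries above are the rollback catching the one pod that had already been recreated with the bad image before the undo landed.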
Jan 31 13:36:25.838: INFO: Pod daemon-set-rrrhs is not available Jan 31 13:36:26.912: INFO: Wrong image for pod: daemon-set-rrrhs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 31 13:36:26.912: INFO: Pod daemon-set-rrrhs is not available Jan 31 13:36:27.882: INFO: Wrong image for pod: daemon-set-rrrhs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 31 13:36:27.883: INFO: Pod daemon-set-rrrhs is not available Jan 31 13:36:28.904: INFO: Wrong image for pod: daemon-set-rrrhs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jan 31 13:36:28.904: INFO: Pod daemon-set-rrrhs is not available Jan 31 13:36:29.893: INFO: Pod daemon-set-lv6s5 is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2809, will wait for the garbage collector to delete the pods Jan 31 13:36:29.984: INFO: Deleting DaemonSet.extensions daemon-set took: 8.834369ms Jan 31 13:36:30.285: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.90385ms Jan 31 13:36:46.596: INFO: Number of nodes with available pods: 0 Jan 31 13:36:46.596: INFO: Number of running nodes: 0, number of available pods: 0 Jan 31 13:36:46.601: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2809/daemonsets","resourceVersion":"22566250"},"items":null} Jan 31 13:36:46.605: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2809/pods","resourceVersion":"22566250"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:36:46.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"daemonsets-2809" for this suite. Jan 31 13:36:52.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:36:52.764: INFO: namespace daemonsets-2809 deletion completed in 6.142034211s • [SLOW TEST:45.926 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:36:52.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-4eb9e76b-f567-44fb-a1e5-79b562f63084 STEP: Creating a pod to test consume configMaps Jan 31 13:36:52.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d" in namespace "projected-8069" to be "success or failure" Jan 31 13:36:52.950: INFO: Pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.061957ms Jan 31 13:36:54.965: INFO: Pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022181656s Jan 31 13:36:57.010: INFO: Pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067862058s Jan 31 13:36:59.020: INFO: Pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077384388s Jan 31 13:37:01.058: INFO: Pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11516276s STEP: Saw pod success Jan 31 13:37:01.058: INFO: Pod "pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d" satisfied condition "success or failure" Jan 31 13:37:01.065: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d container projected-configmap-volume-test: STEP: delete the pod Jan 31 13:37:01.121: INFO: Waiting for pod pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d to disappear Jan 31 13:37:01.137: INFO: Pod pod-projected-configmaps-b224088c-2694-4f31-ab4d-59c7f5e4458d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:37:01.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8069" for this suite. 
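The projected-ConfigMap case exercises one ConfigMap consumed through two projected volumes in the same pod. A minimal manifest sketch of that shape (all names below are hypothetical, not the generated ones from this run):

```shell
# One ConfigMap, two projected volumes, one pod. Hypothetical names;
# requires a running cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
```

The pod reaching Succeeded, as it does in the log above, confirms both mounts exposed the same key.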
Jan 31 13:37:07.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:37:07.372: INFO: namespace projected-8069 deletion completed in 6.178624213s • [SLOW TEST:14.608 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:37:07.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jan 31 13:37:07.501: INFO: Waiting up to 5m0s for pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c" in namespace "containers-8507" to be "success or failure" Jan 31 13:37:07.512: INFO: Pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86465ms Jan 31 13:37:09.527: INFO: Pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025427429s Jan 31 13:37:11.538: INFO: Pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03643805s Jan 31 13:37:13.569: INFO: Pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068013815s Jan 31 13:37:15.580: INFO: Pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079079336s STEP: Saw pod success Jan 31 13:37:15.580: INFO: Pod "client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c" satisfied condition "success or failure" Jan 31 13:37:15.584: INFO: Trying to get logs from node iruya-node pod client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c container test-container: STEP: delete the pod Jan 31 13:37:15.650: INFO: Waiting for pod client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c to disappear Jan 31 13:37:15.655: INFO: Pod client-containers-8b8d04e3-dcda-452c-a5fa-855de35d1f0c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:37:15.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8507" for this suite. 
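The "image defaults" case creates a pod with neither `command` nor `args` in its spec, so the container runtime falls back to the image's own ENTRYPOINT and CMD. A sketch of the shape being tested (the image name is illustrative; the log does not show which image this run used):

```shell
# No command/args in the container spec: the image's ENTRYPOINT/CMD apply.
# Image name is hypothetical; requires a running cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0
EOF

# The spec-level command is absent (empty output expected), proving the
# runtime must consult the image metadata instead:
kubectl get pod defaults-demo -o jsonpath='{.spec.containers[0].command}'
```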
Jan 31 13:37:23.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:37:23.840: INFO: namespace containers-8507 deletion completed in 8.177474135s • [SLOW TEST:16.468 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:37:23.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-3a30560e-af77-4f0b-a6af-06cc8b59d34b STEP: Creating a pod to test consume configMaps Jan 31 13:37:24.029: INFO: Waiting up to 5m0s for pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193" in namespace "configmap-3273" to be "success or failure" Jan 31 13:37:24.033: INFO: Pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193": Phase="Pending", Reason="", readiness=false. Elapsed: 3.396901ms Jan 31 13:37:26.095: INFO: Pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.065578011s Jan 31 13:37:28.102: INFO: Pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072868535s Jan 31 13:37:30.167: INFO: Pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137869437s Jan 31 13:37:32.176: INFO: Pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146474424s STEP: Saw pod success Jan 31 13:37:32.176: INFO: Pod "pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193" satisfied condition "success or failure" Jan 31 13:37:32.179: INFO: Trying to get logs from node iruya-node pod pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193 container configmap-volume-test: STEP: delete the pod Jan 31 13:37:32.336: INFO: Waiting for pod pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193 to disappear Jan 31 13:37:32.341: INFO: Pod pod-configmaps-101bb012-e77f-4cf8-a217-3a528a4c4193 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:37:32.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3273" for this suite. 
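"With mappings" in this ConfigMap volume test means the volume carries an explicit `items` list that projects a ConfigMap key onto a chosen file path, rather than exposing every key under its own name. A minimal sketch (names hypothetical):

```shell
# An items list maps the key "data-2" to the file "path/to/data-2"
# inside the volume. Hypothetical names; requires a running cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-demo
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: map-demo
      items:
      - key: data-2
        path: path/to/data-2
EOF
```

Only the mapped path exists in the mounted volume; unmapped keys are not projected at all.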
Jan 31 13:37:38.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:37:38.539: INFO: namespace configmap-3273 deletion completed in 6.190922073s • [SLOW TEST:14.699 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:37:38.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 31 13:37:47.237: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4eff64e1-b8c6-40cf-8b6e-e3774cbc4f14" Jan 31 13:37:47.237: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4eff64e1-b8c6-40cf-8b6e-e3774cbc4f14" in namespace "pods-9973" to be "terminated due to deadline exceeded" Jan 31 13:37:47.248: INFO: Pod 
"pod-update-activedeadlineseconds-4eff64e1-b8c6-40cf-8b6e-e3774cbc4f14": Phase="Running", Reason="", readiness=true. Elapsed: 10.588561ms Jan 31 13:37:49.259: INFO: Pod "pod-update-activedeadlineseconds-4eff64e1-b8c6-40cf-8b6e-e3774cbc4f14": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021435915s Jan 31 13:37:49.259: INFO: Pod "pod-update-activedeadlineseconds-4eff64e1-b8c6-40cf-8b6e-e3774cbc4f14" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:37:49.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9973" for this suite. Jan 31 13:37:55.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:37:55.403: INFO: namespace pods-9973 deletion completed in 6.130477881s • [SLOW TEST:16.862 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:37:55.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 31 13:37:55.498: INFO: namespace kubectl-3623 Jan 31 13:37:55.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3623' Jan 31 13:37:55.888: INFO: stderr: "" Jan 31 13:37:55.888: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jan 31 13:37:56.900: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:37:56.900: INFO: Found 0 / 1 Jan 31 13:37:57.905: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:37:57.905: INFO: Found 0 / 1 Jan 31 13:37:58.924: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:37:58.924: INFO: Found 0 / 1 Jan 31 13:37:59.899: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:37:59.900: INFO: Found 0 / 1 Jan 31 13:38:00.906: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:38:00.907: INFO: Found 0 / 1 Jan 31 13:38:01.900: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:38:01.900: INFO: Found 0 / 1 Jan 31 13:38:02.899: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:38:02.899: INFO: Found 1 / 1 Jan 31 13:38:02.899: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 31 13:38:02.903: INFO: Selector matched 1 pods for map[app:redis] Jan 31 13:38:02.903: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 31 13:38:02.903: INFO: wait on redis-master startup in kubectl-3623 Jan 31 13:38:02.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wctwb redis-master --namespace=kubectl-3623' Jan 31 13:38:03.193: INFO: stderr: "" Jan 31 13:38:03.193: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. 
''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 31 Jan 13:38:01.609 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Jan 13:38:01.609 # Server started, Redis version 3.2.12\n1:M 31 Jan 13:38:01.609 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Jan 13:38:01.610 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jan 31 13:38:03.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3623' Jan 31 13:38:03.382: INFO: stderr: "" Jan 31 13:38:03.382: INFO: stdout: "service/rm2 exposed\n" Jan 31 13:38:03.421: INFO: Service rm2 in namespace kubectl-3623 found. STEP: exposing service Jan 31 13:38:05.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3623' Jan 31 13:38:05.679: INFO: stderr: "" Jan 31 13:38:05.679: INFO: stdout: "service/rm3 exposed\n" Jan 31 13:38:05.689: INFO: Service rm3 in namespace kubectl-3623 found. 
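The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` invocation in the log generates a Service roughly equivalent to the manifest below; the selector is copied from the replication controller, which the log shows matching `app=redis`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-3623
spec:
  selector:
    app: redis          # inherited from the replication controller's selector
  ports:
  - port: 1234          # --port: the port the Service itself exposes
    targetPort: 6379    # --target-port: the Redis container's port
```

The second step (`kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`) does the same thing again, sourcing the selector from the `rm2` Service instead of the RC.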
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:38:07.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3623" for this suite. Jan 31 13:38:37.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:38:37.929: INFO: namespace kubectl-3623 deletion completed in 30.22065673s • [SLOW TEST:42.526 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:38:37.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-56854a3c-e2f3-4a6b-b0ee-4ae1e7ae35d1 STEP: Creating a pod to test consume configMaps Jan 31 13:38:38.053: INFO: Waiting up to 5m0s for 
pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7" in namespace "configmap-6213" to be "success or failure" Jan 31 13:38:38.068: INFO: Pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.438332ms Jan 31 13:38:40.077: INFO: Pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024238669s Jan 31 13:38:42.086: INFO: Pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032754566s Jan 31 13:38:44.108: INFO: Pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054650895s Jan 31 13:38:46.117: INFO: Pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063669462s STEP: Saw pod success Jan 31 13:38:46.117: INFO: Pod "pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7" satisfied condition "success or failure" Jan 31 13:38:46.121: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7 container configmap-volume-test: STEP: delete the pod Jan 31 13:38:46.177: INFO: Waiting for pod pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7 to disappear Jan 31 13:38:46.196: INFO: Pod pod-configmaps-3ac8b3b2-1a02-48ba-a25a-2f5805bb11e7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:38:46.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6213" for this suite. 
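The "Item mode set" variant exercised here additionally sets a per-item `mode`, so the projected file is created with specific permissions. A sketch of the relevant `volumes` section of the pod spec (names are illustrative):

```yaml
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo   # illustrative name
      items:
      - key: data-1
        path: mapped/data-1
        mode: 0400                  # file readable only by its owner
```

Without `mode`, items fall back to the volume-level `defaultMode` (0644 unless overridden).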
Jan 31 13:38:52.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:38:52.397: INFO: namespace configmap-6213 deletion completed in 6.195925942s • [SLOW TEST:14.467 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:38:52.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-854acd7f-2bd8-49f4-8962-745e9b1614d5 STEP: Creating a pod to test consume configMaps Jan 31 13:38:52.542: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e" in namespace "projected-7945" to be "success or failure" Jan 31 13:38:52.577: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.399467ms Jan 31 13:38:54.587: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045147038s Jan 31 13:38:56.602: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05990975s Jan 31 13:38:58.623: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08106984s Jan 31 13:39:00.634: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091324558s Jan 31 13:39:02.648: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105478665s STEP: Saw pod success Jan 31 13:39:02.648: INFO: Pod "pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e" satisfied condition "success or failure" Jan 31 13:39:02.653: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e container projected-configmap-volume-test: STEP: delete the pod Jan 31 13:39:02.721: INFO: Waiting for pod pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e to disappear Jan 31 13:39:02.729: INFO: Pod pod-projected-configmaps-2ff4dd69-f3b0-45a7-b115-34b0be3b464e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:39:02.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7945" for this suite. 
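The projected-ConfigMap flavor of the same test wraps the ConfigMap in a `projected` volume, which can merge several sources into one mount. A sketch of the volume definition (names are illustrative):

```yaml
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo   # illustrative name
          items:
          - key: data-1
            path: mapped/data-1
            mode: 0400
```

Additional `sources` entries (secret, downwardAPI, serviceAccountToken) could sit alongside the configMap source in the same volume.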
Jan 31 13:39:08.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:39:08.866: INFO: namespace projected-7945 deletion completed in 6.131885471s • [SLOW TEST:16.469 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:39:08.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-099d8a43-5c56-4855-bbf2-a7fdc5fde4b0 STEP: Creating a pod to test consume configMaps Jan 31 13:39:08.948: INFO: Waiting up to 5m0s for pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365" in namespace "configmap-1432" to be "success or failure" Jan 31 13:39:09.001: INFO: Pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365": Phase="Pending", Reason="", readiness=false. 
Elapsed: 52.670461ms Jan 31 13:39:11.015: INFO: Pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066811018s Jan 31 13:39:13.023: INFO: Pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07462502s Jan 31 13:39:15.033: INFO: Pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084697543s Jan 31 13:39:17.054: INFO: Pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106085614s STEP: Saw pod success Jan 31 13:39:17.054: INFO: Pod "pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365" satisfied condition "success or failure" Jan 31 13:39:17.059: INFO: Trying to get logs from node iruya-node pod pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365 container configmap-volume-test: STEP: delete the pod Jan 31 13:39:17.162: INFO: Waiting for pod pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365 to disappear Jan 31 13:39:17.169: INFO: Pod pod-configmaps-353f5021-d842-4c83-b3a2-c28ae7802365 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:39:17.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1432" for this suite. 
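The "as non-root" variant runs the same consumption check under a non-root security context. A sketch of the pod-level addition (UID is illustrative):

```yaml
spec:
  securityContext:
    runAsUser: 1000      # run the container process as a non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox:1.29  # illustrative image
    command: ["cat", "/etc/configmap-volume/mapped/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
```

The point of the variant is that the projected file permissions still allow the non-root UID to read the mapped key.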
Jan 31 13:39:23.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:39:23.339: INFO: namespace configmap-1432 deletion completed in 6.16386853s • [SLOW TEST:14.472 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:39:23.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-a0048d64-86e8-4092-82c2-5dd300751e51 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:39:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8426" for this suite. 
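The binary-data test above checks that both `data` (UTF-8 text) and `binaryData` (arbitrary bytes, base64-encoded in the manifest) keys of a ConfigMap are reflected in the mounted volume. A sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo   # illustrative name
data:
  text-data: "hello"            # plain UTF-8 value
binaryData:
  binary-data: "AQIDBA=="       # raw bytes, base64-encoded (0x01 0x02 0x03 0x04)
```

When mounted, `text-data` and `binary-data` appear as files whose contents are the decoded values; keys must not overlap between the two maps.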
Jan 31 13:39:55.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:39:55.985: INFO: namespace configmap-8426 deletion completed in 22.162329698s • [SLOW TEST:32.645 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:39:55.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jan 31 13:39:56.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3097' Jan 31 13:39:56.382: INFO: stderr: "" Jan 31 13:39:56.383: INFO: stdout: "pod/pause created\n" Jan 31 13:39:56.383: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 31 13:39:56.383: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3097" to be "running and ready" Jan 31 13:39:56.393: INFO: Pod "pause": Phase="Pending", 
Reason="", readiness=false. Elapsed: 9.580774ms Jan 31 13:39:58.402: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018338342s Jan 31 13:40:00.415: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031387555s Jan 31 13:40:02.422: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038405245s Jan 31 13:40:04.431: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.047288642s Jan 31 13:40:04.431: INFO: Pod "pause" satisfied condition "running and ready" Jan 31 13:40:04.431: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jan 31 13:40:04.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3097' Jan 31 13:40:04.715: INFO: stderr: "" Jan 31 13:40:04.715: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 31 13:40:04.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3097' Jan 31 13:40:04.881: INFO: stderr: "" Jan 31 13:40:04.881: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 31 13:40:04.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3097' Jan 31 13:40:05.018: INFO: stderr: "" Jan 31 13:40:05.018: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 31 13:40:05.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod 
pause -L testing-label --namespace=kubectl-3097' Jan 31 13:40:05.161: INFO: stderr: "" Jan 31 13:40:05.161: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jan 31 13:40:05.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3097' Jan 31 13:40:05.336: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 31 13:40:05.336: INFO: stdout: "pod \"pause\" force deleted\n" Jan 31 13:40:05.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3097' Jan 31 13:40:05.455: INFO: stderr: "No resources found.\n" Jan 31 13:40:05.456: INFO: stdout: "" Jan 31 13:40:05.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3097 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 31 13:40:05.567: INFO: stderr: "" Jan 31 13:40:05.567: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:40:05.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3097" for this suite. 
Jan 31 13:40:11.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:40:11.716: INFO: namespace kubectl-3097 deletion completed in 6.140503095s • [SLOW TEST:15.730 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:40:11.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:40:20.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4895" for this suite. 
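The hostAliases test that follows in the log verifies that entries from `spec.hostAliases` are written into the container's `/etc/hosts`. A minimal sketch of such a pod (names, IPs, and hostnames are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases       # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox:1.29            # illustrative image
    command: ["cat", "/etc/hosts"] # the test asserts the aliases appear here
```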
Jan 31 13:41:02.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:41:02.246: INFO: namespace kubelet-test-4895 deletion completed in 42.209539997s • [SLOW TEST:50.530 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:41:02.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 31 13:41:18.607: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 13:41:18.625: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 13:41:20.626: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 13:41:20.644: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 13:41:22.626: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 13:41:22.648: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 13:41:24.626: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 13:41:24.636: INFO: Pod pod-with-poststart-http-hook still exists Jan 31 13:41:26.626: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 31 13:41:26.685: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:41:26.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7488" for this suite. 
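The lifecycle-hook test first starts a handler pod ("create the container to handle the HTTPGet hook request" in the log), then creates a pod whose postStart hook calls it over HTTP. A sketch of the hook side of the pod spec; the host and port are placeholders for the handler pod's address, and the path is illustrative:

```yaml
  containers:
  - name: pod-with-poststart-http-hook
    image: busybox:1.29              # illustrative image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # illustrative path on the handler
          port: 8080                 # placeholder: handler pod's port
          host: 10.32.0.4            # placeholder: handler pod's IP
```

The kubelet runs the hook immediately after the container starts; if the HTTP call fails, the container is killed and restarted per its restart policy.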
Jan 31 13:41:48.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 31 13:41:48.960: INFO: namespace container-lifecycle-hook-7488 deletion completed in 22.26644311s • [SLOW TEST:46.711 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 31 13:41:48.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 31 13:41:49.174: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763" in namespace "downward-api-7311" to be "success or failure" Jan 31 13:41:49.188: INFO: Pod "downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763": Phase="Pending", Reason="", readiness=false. Elapsed: 13.41509ms Jan 31 13:41:51.196: INFO: Pod "downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02185792s Jan 31 13:41:53.211: INFO: Pod "downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036086876s Jan 31 13:41:55.224: INFO: Pod "downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04891554s Jan 31 13:41:57.245: INFO: Pod "downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07049836s STEP: Saw pod success Jan 31 13:41:57.246: INFO: Pod "downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763" satisfied condition "success or failure" Jan 31 13:41:57.253: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763 container client-container: STEP: delete the pod Jan 31 13:41:57.361: INFO: Waiting for pod downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763 to disappear Jan 31 13:41:57.404: INFO: Pod downwardapi-volume-06e14092-444c-42f3-b03d-75393a1b8763 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:41:57.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7311" for this suite. 
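The downward-API test above projects `limits.memory` into a file; when the container declares no memory limit, the kubelet substitutes the node's allocatable memory, which is what the test verifies. A sketch of the relevant spec fragments (names are illustrative):

```yaml
  containers:
  - name: client-container
    image: busybox:1.29    # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # no limit set -> node allocatable memory
```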
Jan 31 13:42:03.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:42:03.564: INFO: namespace downward-api-7311 deletion completed in 6.152013363s
• [SLOW TEST:14.603 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicaSet
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:42:03.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 31 13:42:12.921: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:42:13.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6832" for this suite.
Jan 31 13:45:26.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:45:26.192: INFO: namespace replicaset-6832 deletion completed in 3m12.201118525s
• [SLOW TEST:202.628 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:45:26.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 31 13:45:35.261: INFO: Successfully updated pod "labelsupdate30a13694-ae8f-4695-bafe-ce72d4d13fe2"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:45:37.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2385" for this suite.
Jan 31 13:45:59.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:45:59.554: INFO: namespace downward-api-2385 deletion completed in 22.133252065s
• [SLOW TEST:33.362 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook
when create a pod with lifecycle hook
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:45:59.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 13:46:15.833: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:15.934: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:17.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:17.945: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:19.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:19.954: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:21.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:21.959: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:23.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:23.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:25.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:25.952: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:27.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:27.944: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:29.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:29.944: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:31.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:31.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:33.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:33.947: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:35.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:35.968: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:37.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:37.942: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:39.936: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:39.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:41.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:41.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:43.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:43.945: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:45.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:45.949: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 31 13:46:47.935: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 31 13:46:47.946: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:46:47.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-263" for this suite.
Jan 31 13:47:10.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:47:10.122: INFO: namespace container-lifecycle-hook-263 deletion completed in 22.143906879s
• [SLOW TEST:70.568 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:47:10.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1933.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1933.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 13:47:22.345: INFO: File wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-39a26eab-f4a8-4077-9da4-df36d8840c2a contains '' instead of 'foo.example.com.'
Jan 31 13:47:22.351: INFO: File jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-39a26eab-f4a8-4077-9da4-df36d8840c2a contains '' instead of 'foo.example.com.'
Jan 31 13:47:22.352: INFO: Lookups using dns-1933/dns-test-39a26eab-f4a8-4077-9da4-df36d8840c2a failed for: [wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local]
Jan 31 13:47:27.394: INFO: DNS probes using dns-test-39a26eab-f4a8-4077-9da4-df36d8840c2a succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1933.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1933.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 13:47:41.611: INFO: File wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains '' instead of 'bar.example.com.'
Jan 31 13:47:41.622: INFO: File jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains '' instead of 'bar.example.com.'
Jan 31 13:47:41.622: INFO: Lookups using dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 failed for: [wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local]
Jan 31 13:47:46.638: INFO: File wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 31 13:47:46.645: INFO: File jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 31 13:47:46.645: INFO: Lookups using dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 failed for: [wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local]
Jan 31 13:47:51.639: INFO: File wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 31 13:47:51.646: INFO: File jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 31 13:47:51.646: INFO: Lookups using dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 failed for: [wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local]
Jan 31 13:47:56.656: INFO: File wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 31 13:47:56.666: INFO: File jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 contains '' instead of 'bar.example.com.'
Jan 31 13:47:56.666: INFO: Lookups using dns-1933/dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 failed for: [wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local]
Jan 31 13:48:01.650: INFO: DNS probes using dns-test-5dff2750-50f4-4a6f-877f-7e48f123ffc2 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1933.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1933.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 13:48:16.196: INFO: File wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-e691f01e-1aa0-4315-8a53-5689b4281976 contains '' instead of '10.110.89.3'
Jan 31 13:48:16.217: INFO: File jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local from pod dns-1933/dns-test-e691f01e-1aa0-4315-8a53-5689b4281976 contains '' instead of '10.110.89.3'
Jan 31 13:48:16.217: INFO: Lookups using dns-1933/dns-test-e691f01e-1aa0-4315-8a53-5689b4281976 failed for: [wheezy_udp@dns-test-service-3.dns-1933.svc.cluster.local jessie_udp@dns-test-service-3.dns-1933.svc.cluster.local]
Jan 31 13:48:21.242: INFO: DNS probes using dns-test-e691f01e-1aa0-4315-8a53-5689b4281976 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:48:21.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1933" for this suite.
Jan 31 13:48:29.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:48:29.747: INFO: namespace dns-1933 deletion completed in 8.165266801s
• [SLOW TEST:79.624 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:48:29.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6d0aeb10-3752-4c07-9c96-f67975a308dc
STEP: Creating configMap with name cm-test-opt-upd-015b59f9-66fd-4fa9-999e-30d3abf0108e
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6d0aeb10-3752-4c07-9c96-f67975a308dc
STEP: Updating configmap cm-test-opt-upd-015b59f9-66fd-4fa9-999e-30d3abf0108e
STEP: Creating configMap with name cm-test-opt-create-52c3115b-02b7-4579-980b-1aa043b3daed
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:50:12.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6476" for this suite.
Jan 31 13:50:34.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:50:34.361: INFO: namespace configmap-6476 deletion completed in 22.17306851s
• [SLOW TEST:124.613 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:50:34.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 13:50:34.438: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 31 13:50:37.922: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:50:37.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9997" for this suite.
Jan 31 13:50:50.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:50:50.304: INFO: namespace replication-controller-9997 deletion completed in 12.221282462s
• [SLOW TEST:15.940 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:50:50.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 31 13:50:59.195: INFO: Successfully updated pod "annotationupdate3d59241d-ed24-460b-8b53-4e7e1b13cf50"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:51:01.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1389" for this suite.
Jan 31 13:51:23.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:51:23.526: INFO: namespace downward-api-1389 deletion completed in 22.260983284s
• [SLOW TEST:33.222 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:51:23.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 13:51:23.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697" in namespace "downward-api-3527" to be "success or failure"
Jan 31 13:51:23.770: INFO: Pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697": Phase="Pending", Reason="", readiness=false. Elapsed: 18.92456ms
Jan 31 13:51:25.788: INFO: Pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037004637s
Jan 31 13:51:27.801: INFO: Pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0491272s
Jan 31 13:51:29.814: INFO: Pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062214497s
Jan 31 13:51:31.838: INFO: Pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086101281s
STEP: Saw pod success
Jan 31 13:51:31.838: INFO: Pod "downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697" satisfied condition "success or failure"
Jan 31 13:51:31.849: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697 container client-container:
STEP: delete the pod
Jan 31 13:51:32.112: INFO: Waiting for pod downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697 to disappear
Jan 31 13:51:32.122: INFO: Pod downwardapi-volume-ef476577-b947-4fcb-86e5-27964ba20697 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:51:32.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3527" for this suite.
Jan 31 13:51:38.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:51:38.385: INFO: namespace downward-api-3527 deletion completed in 6.244830698s
• [SLOW TEST:14.858 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:51:38.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 31 13:51:38.455: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 31 13:51:39.283: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 31 13:51:41.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 13:51:43.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 13:51:45.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 13:51:47.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 13:51:49.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716075499, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 13:51:54.619: INFO: Waited 3.079826237s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:51:55.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7121" for this suite.
Jan 31 13:52:01.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:52:01.607: INFO: namespace aggregator-7121 deletion completed in 6.241881069s
• [SLOW TEST:23.221 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:52:01.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-2kg8
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 13:52:01.789: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2kg8" in namespace "subpath-6575" to be "success or failure"
Jan 31 13:52:01.803: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.347154ms
Jan 31 13:52:03.817: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026134675s
Jan 31 13:52:05.831: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040549501s
Jan 31 13:52:07.841: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05049061s
Jan 31 13:52:09.858: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.067408949s
Jan 31 13:52:11.878: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.087425186s
Jan 31 13:52:13.902: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 12.111817708s
Jan 31 13:52:15.924: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 14.133628675s
Jan 31 13:52:17.935: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.144313826s
Jan 31 13:52:19.949: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.158260695s
Jan 31 13:52:21.960: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 20.169420612s
Jan 31 13:52:23.969: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.178531464s
Jan 31 13:52:25.982: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 24.191340496s
Jan 31 13:52:27.992: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 26.201699907s
Jan 31 13:52:30.006: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Running", Reason="", readiness=true. Elapsed: 28.215425537s
Jan 31 13:52:32.029: INFO: Pod "pod-subpath-test-configmap-2kg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.238337829s
STEP: Saw pod success
Jan 31 13:52:32.029: INFO: Pod "pod-subpath-test-configmap-2kg8" satisfied condition "success or failure"
Jan 31 13:52:32.039: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-2kg8 container test-container-subpath-configmap-2kg8:
STEP: delete the pod
Jan 31 13:52:32.195: INFO: Waiting for pod pod-subpath-test-configmap-2kg8 to disappear
Jan 31 13:52:32.209: INFO: Pod pod-subpath-test-configmap-2kg8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2kg8
Jan 31 13:52:32.210: INFO: Deleting pod "pod-subpath-test-configmap-2kg8" in namespace "subpath-6575"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:52:32.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6575" for this suite.
Jan 31 13:52:38.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:52:38.560: INFO: namespace subpath-6575 deletion completed in 6.300463068s
• [SLOW TEST:36.952 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:52:38.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 13:52:38.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3607'
Jan 31 13:52:40.398: INFO: stderr: ""
Jan 31 13:52:40.398: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 31 13:52:50.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3607 -o json'
Jan 31 13:52:50.612: INFO: stderr: ""
Jan 31 13:52:50.613: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-31T13:52:40Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-3607\",\n \"resourceVersion\": \"22568415\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3607/pods/e2e-test-nginx-pod\",\n \"uid\": \"90445a3a-bb94-4276-a6e3-b462cb553d3e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-thxrb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-thxrb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-thxrb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-31T13:52:40Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-31T13:52:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-31T13:52:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-31T13:52:40Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://39550e1fb46945a8d280ee1c120494dd2eeb5aaa48556e6d5124b253ed84106a\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-31T13:52:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-31T13:52:40Z\"\n }\n}\n"
STEP: replace the image in the pod
Jan 31 13:52:50.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3607'
Jan 31 13:52:51.096: INFO: stderr: ""
Jan 31 13:52:51.096: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 31 13:52:51.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3607'
Jan 31 13:52:59.265: INFO: stderr: ""
Jan 31 13:52:59.266: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:52:59.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3607" for this suite.
Jan 31 13:53:05.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:53:05.459: INFO: namespace kubectl-3607 deletion completed in 6.168611423s
• [SLOW TEST:26.894 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:53:05.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 13:53:13.714: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:53:13.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2824" for this suite.
Jan 31 13:53:19.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:53:20.064: INFO: namespace container-runtime-2824 deletion completed in 6.254574725s
• [SLOW TEST:14.604 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:53:20.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3423
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3423 to expose endpoints map[]
Jan 31 13:53:20.347: INFO: Get endpoints failed (97.832875ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 31 13:53:21.364: INFO: successfully validated that service endpoint-test2 in namespace services-3423 exposes endpoints map[] (1.114262242s elapsed)
STEP: Creating pod pod1 in namespace services-3423
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3423 to expose endpoints map[pod1:[80]]
Jan 31 13:53:25.595: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.201876529s elapsed, will retry)
Jan 31 13:53:29.653: INFO: successfully validated that service endpoint-test2 in namespace services-3423 exposes endpoints map[pod1:[80]] (8.259331754s elapsed)
STEP: Creating pod pod2 in namespace services-3423
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3423 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 31 13:53:34.150: INFO: Unexpected endpoints: found map[ebb487f8-3541-47b7-93ea-8f4233592ae9:[80]], expected map[pod1:[80] pod2:[80]] (4.477022257s elapsed, will retry)
Jan 31 13:53:36.237: INFO: successfully validated that service endpoint-test2 in namespace services-3423 exposes endpoints map[pod1:[80] pod2:[80]] (6.564281162s elapsed)
STEP: Deleting pod pod1 in namespace services-3423
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3423 to expose endpoints map[pod2:[80]]
Jan 31 13:53:36.351: INFO: successfully validated that service endpoint-test2 in namespace services-3423 exposes endpoints map[pod2:[80]] (89.368506ms elapsed)
STEP: Deleting pod pod2 in namespace services-3423
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3423 to expose endpoints map[]
Jan 31 13:53:36.423: INFO: successfully validated that service endpoint-test2 in namespace services-3423 exposes endpoints map[] (47.255245ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:53:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3423" for this suite.
Jan 31 13:53:58.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:53:58.667: INFO: namespace services-3423 deletion completed in 22.162539837s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:38.601 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:53:58.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 31 13:53:59.498: INFO: Pod name wrapped-volume-race-ebd0a560-dfa3-4404-a232-0a973608cd47: Found 0 pods out of 5
Jan 31 13:54:04.529: INFO: Pod name wrapped-volume-race-ebd0a560-dfa3-4404-a232-0a973608cd47: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ebd0a560-dfa3-4404-a232-0a973608cd47 in namespace emptydir-wrapper-7506, will wait for the garbage collector to delete the pods
Jan 31 13:54:30.675: INFO: Deleting ReplicationController wrapped-volume-race-ebd0a560-dfa3-4404-a232-0a973608cd47 took: 33.388661ms
Jan 31 13:54:31.176: INFO: Terminating ReplicationController wrapped-volume-race-ebd0a560-dfa3-4404-a232-0a973608cd47 pods took: 501.38913ms
STEP: Creating RC which spawns configmap-volume pods
Jan 31 13:55:16.848: INFO: Pod name wrapped-volume-race-9360d819-2674-4806-bd55-700a7b2c9b85: Found 0 pods out of 5
Jan 31 13:55:21.882: INFO: Pod name wrapped-volume-race-9360d819-2674-4806-bd55-700a7b2c9b85: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9360d819-2674-4806-bd55-700a7b2c9b85 in namespace emptydir-wrapper-7506, will wait for the garbage collector to delete the pods
Jan 31 13:55:54.002: INFO: Deleting ReplicationController wrapped-volume-race-9360d819-2674-4806-bd55-700a7b2c9b85 took: 26.139346ms
Jan 31 13:55:54.304: INFO: Terminating ReplicationController wrapped-volume-race-9360d819-2674-4806-bd55-700a7b2c9b85 pods took: 301.187687ms
STEP: Creating RC which spawns configmap-volume pods
Jan 31 13:56:47.106: INFO: Pod name wrapped-volume-race-31a950c4-c9c4-4735-8ddc-a16bc92fe7be: Found 0 pods out of 5
Jan 31 13:56:52.131: INFO: Pod name wrapped-volume-race-31a950c4-c9c4-4735-8ddc-a16bc92fe7be: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-31a950c4-c9c4-4735-8ddc-a16bc92fe7be in namespace emptydir-wrapper-7506, will wait for the garbage collector to delete the pods
Jan 31 13:57:24.438: INFO: Deleting ReplicationController wrapped-volume-race-31a950c4-c9c4-4735-8ddc-a16bc92fe7be took: 67.733339ms
Jan 31 13:57:24.839: INFO: Terminating ReplicationController wrapped-volume-race-31a950c4-c9c4-4735-8ddc-a16bc92fe7be pods took: 401.452237ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:58:09.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7506" for this suite.
Jan 31 13:58:19.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:58:19.387: INFO: namespace emptydir-wrapper-7506 deletion completed in 10.148408409s
• [SLOW TEST:260.720 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:58:19.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4
Jan 31 13:58:19.467: INFO: Pod name my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4: Found 0 pods out of 1
Jan 31 13:58:24.480: INFO: Pod name my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4: Found 1 pods out of 1
Jan 31 13:58:24.481: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4" are running
Jan 31 13:58:32.499: INFO: Pod "my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4-8857s" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 13:58:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 13:58:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 13:58:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 13:58:19 +0000 UTC Reason: Message:}])
Jan 31 13:58:32.500: INFO: Trying to dial the pod
Jan 31 13:58:37.537: INFO: Controller my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4: Got expected result from replica 1 [my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4-8857s]: "my-hostname-basic-030ba450-bce1-4c81-80d9-68536cc9d9f4-8857s", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:58:37.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6505" for this suite.
Jan 31 13:58:43.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:58:43.721: INFO: namespace replication-controller-6505 deletion completed in 6.17518187s
• [SLOW TEST:24.333 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:58:43.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
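A "simple DaemonSet" of the kind this test creates schedules one pod per schedulable node, which is why the log below converges on 2 available pods for this 2-node cluster. A minimal sketch, assuming an nginx image and label values not taken from this log (only the name "daemon-set" and namespace "daemonsets-3415" appear in it):

```yaml
# Hedged sketch of a simple DaemonSet like the test's "daemon-set".
# Image and label key/value are illustrative assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-3415
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
```

The polling entries that follow repeat until the number of nodes with an available daemon pod equals the number of running nodes.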
Jan 31 13:58:44.150: INFO: Number of nodes with available pods: 0
Jan 31 13:58:44.150: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:45.933: INFO: Number of nodes with available pods: 0
Jan 31 13:58:45.933: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:46.397: INFO: Number of nodes with available pods: 0
Jan 31 13:58:46.397: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:47.488: INFO: Number of nodes with available pods: 0
Jan 31 13:58:47.488: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:48.183: INFO: Number of nodes with available pods: 0
Jan 31 13:58:48.183: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:49.171: INFO: Number of nodes with available pods: 0
Jan 31 13:58:49.171: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:50.952: INFO: Number of nodes with available pods: 0
Jan 31 13:58:50.952: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:51.431: INFO: Number of nodes with available pods: 0
Jan 31 13:58:51.432: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:52.253: INFO: Number of nodes with available pods: 0
Jan 31 13:58:52.253: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:53.159: INFO: Number of nodes with available pods: 0
Jan 31 13:58:53.159: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:54.167: INFO: Number of nodes with available pods: 1
Jan 31 13:58:54.167: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:55.240: INFO: Number of nodes with available pods: 1
Jan 31 13:58:55.241: INFO: Node iruya-node is running more than one daemon pod
Jan 31 13:58:56.179: INFO: Number of nodes with available pods: 2
Jan 31 13:58:56.179: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 31 13:58:56.264: INFO: Number of nodes with available pods: 1
Jan 31 13:58:56.264: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:58:57.504: INFO: Number of nodes with available pods: 1
Jan 31 13:58:57.504: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:58:58.283: INFO: Number of nodes with available pods: 1
Jan 31 13:58:58.283: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:58:59.287: INFO: Number of nodes with available pods: 1
Jan 31 13:58:59.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:00.281: INFO: Number of nodes with available pods: 1
Jan 31 13:59:00.281: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:01.293: INFO: Number of nodes with available pods: 1
Jan 31 13:59:01.293: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:02.275: INFO: Number of nodes with available pods: 1
Jan 31 13:59:02.275: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:03.287: INFO: Number of nodes with available pods: 1
Jan 31 13:59:03.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:04.286: INFO: Number of nodes with available pods: 1
Jan 31 13:59:04.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:05.294: INFO: Number of nodes with available pods: 1
Jan 31 13:59:05.294: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:06.298: INFO: Number of nodes with available pods: 1
Jan 31 13:59:06.299: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:07.287: INFO: Number of nodes with available pods: 1
Jan 31 13:59:07.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:08.280: INFO: Number of nodes with available pods: 1
Jan 31 13:59:08.280: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:09.283: INFO: Number of nodes with available pods: 1
Jan 31 13:59:09.283: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:10.287: INFO: Number of nodes with available pods: 1
Jan 31 13:59:10.287: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:11.280: INFO: Number of nodes with available pods: 1
Jan 31 13:59:11.280: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:12.638: INFO: Number of nodes with available pods: 1
Jan 31 13:59:12.639: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:13.294: INFO: Number of nodes with available pods: 1
Jan 31 13:59:13.294: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:14.279: INFO: Number of nodes with available pods: 1
Jan 31 13:59:14.279: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 31 13:59:15.283: INFO: Number of nodes with available pods: 2
Jan 31 13:59:15.283: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3415, will wait for the garbage collector to delete the pods
Jan 31 13:59:15.360: INFO: Deleting DaemonSet.extensions daemon-set took: 16.112627ms
Jan 31 13:59:15.661: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.023688ms
Jan 31 13:59:22.471: INFO: Number of nodes with available pods: 0
Jan 31 13:59:22.472: INFO: Number of running nodes: 0, number of available pods: 0
Jan 31 13:59:22.476: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3415/daemonsets","resourceVersion":"22569970"},"items":null}
Jan 31 13:59:22.479: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3415/pods","resourceVersion":"22569970"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:59:22.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3415" for this suite.
Jan 31 13:59:28.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:59:28.655: INFO: namespace daemonsets-3415 deletion completed in 6.160102306s
• [SLOW TEST:44.933 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:59:28.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod
to test emptydir 0777 on node default medium Jan 31 13:59:28.844: INFO: Waiting up to 5m0s for pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d" in namespace "emptydir-1537" to be "success or failure" Jan 31 13:59:28.862: INFO: Pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.732628ms Jan 31 13:59:30.884: INFO: Pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040568563s Jan 31 13:59:32.912: INFO: Pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068525228s Jan 31 13:59:34.924: INFO: Pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080301132s Jan 31 13:59:36.934: INFO: Pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090262646s STEP: Saw pod success Jan 31 13:59:36.934: INFO: Pod "pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d" satisfied condition "success or failure" Jan 31 13:59:36.940: INFO: Trying to get logs from node iruya-node pod pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d container test-container: STEP: delete the pod Jan 31 13:59:37.001: INFO: Waiting for pod pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d to disappear Jan 31 13:59:37.087: INFO: Pod pod-cd9c2567-d31a-4c33-ba1e-515c5f2d428d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 31 13:59:37.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1537" for this suite. 
Jan 31 13:59:43.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:59:43.248: INFO: namespace emptydir-1537 deletion completed in 6.15101463s

• [SLOW TEST:14.592 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:59:43.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 13:59:43.363: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 22.399931ms)
Jan 31 13:59:43.379: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.169221ms)
Jan 31 13:59:43.387: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.469558ms)
Jan 31 13:59:43.395: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.814367ms)
Jan 31 13:59:43.405: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.469113ms)
Jan 31 13:59:43.416: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.691118ms)
Jan 31 13:59:43.425: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.600034ms)
Jan 31 13:59:43.441: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.656811ms)
Jan 31 13:59:43.451: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.024223ms)
Jan 31 13:59:43.456: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.148607ms)
Jan 31 13:59:43.466: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.171266ms)
Jan 31 13:59:43.477: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.471669ms)
Jan 31 13:59:43.491: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.791399ms)
Jan 31 13:59:43.496: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.018587ms)
Jan 31 13:59:43.501: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.919917ms)
Jan 31 13:59:43.506: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.509312ms)
Jan 31 13:59:43.510: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.164212ms)
Jan 31 13:59:43.515: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.563892ms)
Jan 31 13:59:43.521: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.25772ms)
Jan 31 13:59:43.532: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.869362ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 13:59:43.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9482" for this suite.
Jan 31 13:59:49.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 13:59:49.701: INFO: namespace proxy-9482 deletion completed in 6.163365986s

• [SLOW TEST:6.453 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 13:59:49.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9849
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 13:59:49.794: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 14:00:28.109: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9849 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:00:28.110: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:00:28.202090       9 log.go:172] (0xc0027a4630) (0xc0026b7680) Create stream
I0131 14:00:28.202221       9 log.go:172] (0xc0027a4630) (0xc0026b7680) Stream added, broadcasting: 1
I0131 14:00:28.210709       9 log.go:172] (0xc0027a4630) Reply frame received for 1
I0131 14:00:28.210851       9 log.go:172] (0xc0027a4630) (0xc001a10000) Create stream
I0131 14:00:28.210867       9 log.go:172] (0xc0027a4630) (0xc001a10000) Stream added, broadcasting: 3
I0131 14:00:28.213127       9 log.go:172] (0xc0027a4630) Reply frame received for 3
I0131 14:00:28.213172       9 log.go:172] (0xc0027a4630) (0xc0026b7720) Create stream
I0131 14:00:28.213188       9 log.go:172] (0xc0027a4630) (0xc0026b7720) Stream added, broadcasting: 5
I0131 14:00:28.215661       9 log.go:172] (0xc0027a4630) Reply frame received for 5
I0131 14:00:28.372766       9 log.go:172] (0xc0027a4630) Data frame received for 3
I0131 14:00:28.372877       9 log.go:172] (0xc001a10000) (3) Data frame handling
I0131 14:00:28.372895       9 log.go:172] (0xc001a10000) (3) Data frame sent
I0131 14:00:28.599425       9 log.go:172] (0xc0027a4630) Data frame received for 1
I0131 14:00:28.599656       9 log.go:172] (0xc0027a4630) (0xc0026b7720) Stream removed, broadcasting: 5
I0131 14:00:28.599739       9 log.go:172] (0xc0026b7680) (1) Data frame handling
I0131 14:00:28.599779       9 log.go:172] (0xc0026b7680) (1) Data frame sent
I0131 14:00:28.599883       9 log.go:172] (0xc0027a4630) (0xc001a10000) Stream removed, broadcasting: 3
I0131 14:00:28.599938       9 log.go:172] (0xc0027a4630) (0xc0026b7680) Stream removed, broadcasting: 1
I0131 14:00:28.600028       9 log.go:172] (0xc0027a4630) Go away received
I0131 14:00:28.600804       9 log.go:172] (0xc0027a4630) (0xc0026b7680) Stream removed, broadcasting: 1
I0131 14:00:28.600818       9 log.go:172] (0xc0027a4630) (0xc001a10000) Stream removed, broadcasting: 3
I0131 14:00:28.600827       9 log.go:172] (0xc0027a4630) (0xc0026b7720) Stream removed, broadcasting: 5
Jan 31 14:00:28.601: INFO: Waiting for endpoints: map[]
Jan 31 14:00:28.612: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9849 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:00:28.612: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:00:28.686160       9 log.go:172] (0xc0025bc630) (0xc00294a1e0) Create stream
I0131 14:00:28.686345       9 log.go:172] (0xc0025bc630) (0xc00294a1e0) Stream added, broadcasting: 1
I0131 14:00:28.699516       9 log.go:172] (0xc0025bc630) Reply frame received for 1
I0131 14:00:28.699570       9 log.go:172] (0xc0025bc630) (0xc0026b7d60) Create stream
I0131 14:00:28.699583       9 log.go:172] (0xc0025bc630) (0xc0026b7d60) Stream added, broadcasting: 3
I0131 14:00:28.701285       9 log.go:172] (0xc0025bc630) Reply frame received for 3
I0131 14:00:28.701305       9 log.go:172] (0xc0025bc630) (0xc001a100a0) Create stream
I0131 14:00:28.701313       9 log.go:172] (0xc0025bc630) (0xc001a100a0) Stream added, broadcasting: 5
I0131 14:00:28.702368       9 log.go:172] (0xc0025bc630) Reply frame received for 5
I0131 14:00:28.806399       9 log.go:172] (0xc0025bc630) Data frame received for 3
I0131 14:00:28.806503       9 log.go:172] (0xc0026b7d60) (3) Data frame handling
I0131 14:00:28.806568       9 log.go:172] (0xc0026b7d60) (3) Data frame sent
I0131 14:00:28.962589       9 log.go:172] (0xc0025bc630) (0xc0026b7d60) Stream removed, broadcasting: 3
I0131 14:00:28.963097       9 log.go:172] (0xc0025bc630) Data frame received for 1
I0131 14:00:28.963286       9 log.go:172] (0xc0025bc630) (0xc001a100a0) Stream removed, broadcasting: 5
I0131 14:00:28.963369       9 log.go:172] (0xc00294a1e0) (1) Data frame handling
I0131 14:00:28.963409       9 log.go:172] (0xc00294a1e0) (1) Data frame sent
I0131 14:00:28.963439       9 log.go:172] (0xc0025bc630) (0xc00294a1e0) Stream removed, broadcasting: 1
I0131 14:00:28.963460       9 log.go:172] (0xc0025bc630) Go away received
I0131 14:00:28.964205       9 log.go:172] (0xc0025bc630) (0xc00294a1e0) Stream removed, broadcasting: 1
I0131 14:00:28.964243       9 log.go:172] (0xc0025bc630) (0xc0026b7d60) Stream removed, broadcasting: 3
I0131 14:00:28.964259       9 log.go:172] (0xc0025bc630) (0xc001a100a0) Stream removed, broadcasting: 5
Jan 31 14:00:28.964: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:00:28.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9849" for this suite.
Jan 31 14:00:53.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:00:53.676: INFO: namespace pod-network-test-9849 deletion completed in 24.700482945s

• [SLOW TEST:63.975 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:00:53.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:00:53.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947" in namespace "downward-api-4979" to be "success or failure"
Jan 31 14:00:53.760: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947": Phase="Pending", Reason="", readiness=false. Elapsed: 4.60603ms
Jan 31 14:00:55.769: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013670099s
Jan 31 14:00:57.785: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029699831s
Jan 31 14:00:59.794: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03849654s
Jan 31 14:01:01.821: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066048517s
Jan 31 14:01:03.845: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089544202s
STEP: Saw pod success
Jan 31 14:01:03.845: INFO: Pod "downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947" satisfied condition "success or failure"
Jan 31 14:01:03.856: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947 container client-container: 
STEP: delete the pod
Jan 31 14:01:03.986: INFO: Waiting for pod downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947 to disappear
Jan 31 14:01:03.992: INFO: Pod downwardapi-volume-6b7ddb60-c372-4c87-9a5e-9c36ca2f6947 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:01:03.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4979" for this suite.
Jan 31 14:01:10.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:01:10.158: INFO: namespace downward-api-4979 deletion completed in 6.159283624s

• [SLOW TEST:16.482 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:01:10.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:01:18.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-346" for this suite.
Jan 31 14:02:10.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:02:10.569: INFO: namespace kubelet-test-346 deletion completed in 52.176045119s

• [SLOW TEST:60.411 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:02:10.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 31 14:02:10.711: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:02:24.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5307" for this suite.
Jan 31 14:02:30.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:02:30.902: INFO: namespace init-container-5307 deletion completed in 6.172472665s

• [SLOW TEST:20.332 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:02:30.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-3869
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3869
STEP: Deleting pre-stop pod
Jan 31 14:02:52.175: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:02:52.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3869" for this suite.
Jan 31 14:03:38.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:03:38.509: INFO: namespace prestop-3869 deletion completed in 46.288941772s

• [SLOW TEST:67.607 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:03:38.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:03:38.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 31 14:03:38.859: INFO: stderr: ""
Jan 31 14:03:38.860: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:03:38.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8853" for this suite.
Jan 31 14:03:44.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:03:45.027: INFO: namespace kubectl-8853 deletion completed in 6.143006277s

• [SLOW TEST:6.517 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:03:45.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e5805700-7443-46ed-970a-bed04977e188
STEP: Creating a pod to test consume secrets
Jan 31 14:03:45.250: INFO: Waiting up to 5m0s for pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c" in namespace "secrets-6802" to be "success or failure"
Jan 31 14:03:45.278: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.40972ms
Jan 31 14:03:47.288: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037556627s
Jan 31 14:03:49.296: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045519124s
Jan 31 14:03:51.304: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053553984s
Jan 31 14:03:53.314: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063698373s
Jan 31 14:03:55.400: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149945288s
STEP: Saw pod success
Jan 31 14:03:55.401: INFO: Pod "pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c" satisfied condition "success or failure"
Jan 31 14:03:55.410: INFO: Trying to get logs from node iruya-node pod pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c container secret-volume-test: 
STEP: delete the pod
Jan 31 14:03:55.645: INFO: Waiting for pod pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c to disappear
Jan 31 14:03:55.654: INFO: Pod pod-secrets-451f7115-bfb0-43ae-8a61-b0f9d283ea6c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:03:55.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6802" for this suite.
Jan 31 14:04:01.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:04:01.896: INFO: namespace secrets-6802 deletion completed in 6.231627561s

• [SLOW TEST:16.869 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:04:01.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9668
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 31 14:04:02.112: INFO: Found 0 stateful pods, waiting for 3
Jan 31 14:04:12.126: INFO: Found 2 stateful pods, waiting for 3
Jan 31 14:04:22.142: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:04:22.143: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:04:22.143: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 14:04:32.123: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:04:32.123: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:04:32.123: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:04:32.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9668 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:04:34.556: INFO: stderr: "I0131 14:04:34.189423    1213 log.go:172] (0xc000a204d0) (0xc000586960) Create stream\nI0131 14:04:34.189545    1213 log.go:172] (0xc000a204d0) (0xc000586960) Stream added, broadcasting: 1\nI0131 14:04:34.196736    1213 log.go:172] (0xc000a204d0) Reply frame received for 1\nI0131 14:04:34.196806    1213 log.go:172] (0xc000a204d0) (0xc00077a0a0) Create stream\nI0131 14:04:34.196849    1213 log.go:172] (0xc000a204d0) (0xc00077a0a0) Stream added, broadcasting: 3\nI0131 14:04:34.198274    1213 log.go:172] (0xc000a204d0) Reply frame received for 3\nI0131 14:04:34.198292    1213 log.go:172] (0xc000a204d0) (0xc000586a00) Create stream\nI0131 14:04:34.198298    1213 log.go:172] (0xc000a204d0) (0xc000586a00) Stream added, broadcasting: 5\nI0131 14:04:34.199504    1213 log.go:172] (0xc000a204d0) Reply frame received for 5\nI0131 14:04:34.347643    1213 log.go:172] (0xc000a204d0) Data frame received for 5\nI0131 14:04:34.347743    1213 log.go:172] (0xc000586a00) (5) Data frame handling\nI0131 14:04:34.347782    1213 log.go:172] (0xc000586a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:04:34.423141    1213 log.go:172] (0xc000a204d0) Data frame received for 3\nI0131 14:04:34.423203    1213 log.go:172] (0xc00077a0a0) (3) Data frame handling\nI0131 14:04:34.423249    1213 log.go:172] (0xc00077a0a0) (3) Data frame sent\nI0131 14:04:34.538297    1213 log.go:172] (0xc000a204d0) (0xc00077a0a0) Stream removed, broadcasting: 3\nI0131 14:04:34.538798    1213 log.go:172] (0xc000a204d0) Data frame received for 1\nI0131 14:04:34.538944    1213 log.go:172] (0xc000a204d0) (0xc000586a00) Stream removed, broadcasting: 5\nI0131 14:04:34.539032    1213 log.go:172] (0xc000586960) (1) Data frame handling\nI0131 14:04:34.539068    1213 log.go:172] (0xc000586960) (1) Data frame sent\nI0131 14:04:34.539086    1213 log.go:172] (0xc000a204d0) (0xc000586960) Stream removed, broadcasting: 1\nI0131 14:04:34.539131    1213 log.go:172] (0xc000a204d0) Go away received\nI0131 14:04:34.540742    1213 log.go:172] (0xc000a204d0) (0xc000586960) Stream removed, broadcasting: 1\nI0131 14:04:34.540879    1213 log.go:172] (0xc000a204d0) (0xc00077a0a0) Stream removed, broadcasting: 3\nI0131 14:04:34.540914    1213 log.go:172] (0xc000a204d0) (0xc000586a00) Stream removed, broadcasting: 5\n"
Jan 31 14:04:34.557: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:04:34.558: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 31 14:04:44.627: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 31 14:04:54.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9668 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:04:55.142: INFO: stderr: "I0131 14:04:54.906633    1248 log.go:172] (0xc000ac04d0) (0xc000418820) Create stream\nI0131 14:04:54.906838    1248 log.go:172] (0xc000ac04d0) (0xc000418820) Stream added, broadcasting: 1\nI0131 14:04:54.921152    1248 log.go:172] (0xc000ac04d0) Reply frame received for 1\nI0131 14:04:54.921243    1248 log.go:172] (0xc000ac04d0) (0xc000418000) Create stream\nI0131 14:04:54.921255    1248 log.go:172] (0xc000ac04d0) (0xc000418000) Stream added, broadcasting: 3\nI0131 14:04:54.922967    1248 log.go:172] (0xc000ac04d0) Reply frame received for 3\nI0131 14:04:54.922996    1248 log.go:172] (0xc000ac04d0) (0xc000670140) Create stream\nI0131 14:04:54.923007    1248 log.go:172] (0xc000ac04d0) (0xc000670140) Stream added, broadcasting: 5\nI0131 14:04:54.924302    1248 log.go:172] (0xc000ac04d0) Reply frame received for 5\nI0131 14:04:55.034968    1248 log.go:172] (0xc000ac04d0) Data frame received for 5\nI0131 14:04:55.035034    1248 log.go:172] (0xc000670140) (5) Data frame handling\nI0131 14:04:55.035060    1248 log.go:172] (0xc000670140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0131 14:04:55.035083    1248 log.go:172] (0xc000ac04d0) Data frame received for 3\nI0131 14:04:55.035092    1248 log.go:172] (0xc000418000) (3) Data frame handling\nI0131 14:04:55.035106    1248 log.go:172] (0xc000418000) (3) Data frame sent\nI0131 14:04:55.130661    1248 log.go:172] (0xc000ac04d0) Data frame received for 1\nI0131 14:04:55.130761    1248 log.go:172] (0xc000ac04d0) (0xc000670140) Stream removed, broadcasting: 5\nI0131 14:04:55.130900    1248 log.go:172] (0xc000ac04d0) (0xc000418000) Stream removed, broadcasting: 3\nI0131 14:04:55.130999    1248 log.go:172] (0xc000418820) (1) Data frame handling\nI0131 14:04:55.131028    1248 log.go:172] (0xc000418820) (1) Data frame sent\nI0131 14:04:55.131045    1248 log.go:172] (0xc000ac04d0) (0xc000418820) Stream removed, broadcasting: 1\nI0131 14:04:55.131066    1248 log.go:172] (0xc000ac04d0) Go away received\nI0131 14:04:55.132326    1248 log.go:172] (0xc000ac04d0) (0xc000418820) Stream removed, broadcasting: 1\nI0131 14:04:55.132349    1248 log.go:172] (0xc000ac04d0) (0xc000418000) Stream removed, broadcasting: 3\nI0131 14:04:55.132366    1248 log.go:172] (0xc000ac04d0) (0xc000670140) Stream removed, broadcasting: 5\n"
Jan 31 14:04:55.142: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:04:55.142: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:05:05.230: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:05:05.230: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 14:05:05.230: INFO: Waiting for Pod statefulset-9668/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 14:05:15.243: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:05:15.244: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 14:05:15.244: INFO: Waiting for Pod statefulset-9668/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 14:05:25.244: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:05:25.244: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 31 14:05:35.245: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:05:35.245: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jan 31 14:05:45.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9668 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:05:45.980: INFO: stderr: "I0131 14:05:45.514477    1269 log.go:172] (0xc00013adc0) (0xc00082e640) Create stream\nI0131 14:05:45.515471    1269 log.go:172] (0xc00013adc0) (0xc00082e640) Stream added, broadcasting: 1\nI0131 14:05:45.522116    1269 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0131 14:05:45.522433    1269 log.go:172] (0xc00013adc0) (0xc000640280) Create stream\nI0131 14:05:45.522465    1269 log.go:172] (0xc00013adc0) (0xc000640280) Stream added, broadcasting: 3\nI0131 14:05:45.525255    1269 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0131 14:05:45.525325    1269 log.go:172] (0xc00013adc0) (0xc00082e6e0) Create stream\nI0131 14:05:45.525353    1269 log.go:172] (0xc00013adc0) (0xc00082e6e0) Stream added, broadcasting: 5\nI0131 14:05:45.529034    1269 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0131 14:05:45.707166    1269 log.go:172] (0xc00013adc0) Data frame received for 5\nI0131 14:05:45.707256    1269 log.go:172] (0xc00082e6e0) (5) Data frame handling\nI0131 14:05:45.707288    1269 log.go:172] (0xc00082e6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:05:45.835811    1269 log.go:172] (0xc00013adc0) Data frame received for 3\nI0131 14:05:45.836053    1269 log.go:172] (0xc000640280) (3) Data frame handling\nI0131 14:05:45.836113    1269 log.go:172] (0xc000640280) (3) Data frame sent\nI0131 14:05:45.958305    1269 log.go:172] (0xc00013adc0) Data frame received for 1\nI0131 14:05:45.959068    1269 log.go:172] (0xc00013adc0) (0xc000640280) Stream removed, broadcasting: 3\nI0131 14:05:45.959165    1269 log.go:172] (0xc00082e640) (1) Data frame handling\nI0131 14:05:45.959219    1269 log.go:172] (0xc00082e640) (1) Data frame sent\nI0131 14:05:45.959280    1269 log.go:172] (0xc00013adc0) (0xc00082e640) Stream removed, broadcasting: 1\nI0131 14:05:45.959379    1269 log.go:172] (0xc00013adc0) (0xc00082e6e0) Stream removed, broadcasting: 5\nI0131 14:05:45.959633    1269 log.go:172] (0xc00013adc0) Go away received\nI0131 14:05:45.960847    1269 log.go:172] (0xc00013adc0) (0xc00082e640) Stream removed, broadcasting: 1\nI0131 14:05:45.960940    1269 log.go:172] (0xc00013adc0) (0xc000640280) Stream removed, broadcasting: 3\nI0131 14:05:45.960995    1269 log.go:172] (0xc00013adc0) (0xc00082e6e0) Stream removed, broadcasting: 5\n"
Jan 31 14:05:45.981: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:05:45.981: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:05:46.155: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 31 14:05:56.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9668 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:05:56.691: INFO: stderr: "I0131 14:05:56.489304    1288 log.go:172] (0xc000944580) (0xc000936820) Create stream\nI0131 14:05:56.489581    1288 log.go:172] (0xc000944580) (0xc000936820) Stream added, broadcasting: 1\nI0131 14:05:56.502574    1288 log.go:172] (0xc000944580) Reply frame received for 1\nI0131 14:05:56.502706    1288 log.go:172] (0xc000944580) (0xc00055b540) Create stream\nI0131 14:05:56.502718    1288 log.go:172] (0xc000944580) (0xc00055b540) Stream added, broadcasting: 3\nI0131 14:05:56.507139    1288 log.go:172] (0xc000944580) Reply frame received for 3\nI0131 14:05:56.507296    1288 log.go:172] (0xc000944580) (0xc000936000) Create stream\nI0131 14:05:56.507314    1288 log.go:172] (0xc000944580) (0xc000936000) Stream added, broadcasting: 5\nI0131 14:05:56.509567    1288 log.go:172] (0xc000944580) Reply frame received for 5\nI0131 14:05:56.605383    1288 log.go:172] (0xc000944580) Data frame received for 3\nI0131 14:05:56.605563    1288 log.go:172] (0xc00055b540) (3) Data frame handling\nI0131 14:05:56.605650    1288 log.go:172] (0xc00055b540) (3) Data frame sent\nI0131 14:05:56.605675    1288 log.go:172] (0xc000944580) Data frame received for 5\nI0131 14:05:56.605689    1288 log.go:172] (0xc000936000) (5) Data frame handling\nI0131 14:05:56.605706    1288 log.go:172] (0xc000936000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0131 14:05:56.681988    1288 log.go:172] (0xc000944580) Data frame received for 1\nI0131 14:05:56.682041    1288 log.go:172] (0xc000936820) (1) Data frame handling\nI0131 14:05:56.682053    1288 log.go:172] (0xc000936820) (1) Data frame sent\nI0131 14:05:56.682064    1288 log.go:172] (0xc000944580) (0xc000936820) Stream removed, broadcasting: 1\nI0131 14:05:56.682628    1288 log.go:172] (0xc000944580) (0xc00055b540) Stream removed, broadcasting: 3\nI0131 14:05:56.682728    1288 log.go:172] (0xc000944580) (0xc000936000) Stream removed, broadcasting: 5\nI0131 14:05:56.682811    1288 log.go:172] (0xc000944580) Go away received\nI0131 14:05:56.683479    1288 log.go:172] (0xc000944580) (0xc000936820) Stream removed, broadcasting: 1\nI0131 14:05:56.683527    1288 log.go:172] (0xc000944580) (0xc00055b540) Stream removed, broadcasting: 3\nI0131 14:05:56.683535    1288 log.go:172] (0xc000944580) (0xc000936000) Stream removed, broadcasting: 5\n"
Jan 31 14:05:56.691: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:05:56.691: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:06:06.799: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:06:06.799: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 14:06:06.799: INFO: Waiting for Pod statefulset-9668/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 14:06:16.819: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:06:16.820: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 14:06:16.820: INFO: Waiting for Pod statefulset-9668/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 14:06:26.829: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
Jan 31 14:06:26.829: INFO: Waiting for Pod statefulset-9668/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 31 14:06:36.811: INFO: Waiting for StatefulSet statefulset-9668/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 31 14:06:46.818: INFO: Deleting all statefulset in ns statefulset-9668
Jan 31 14:06:46.824: INFO: Scaling statefulset ss2 to 0
Jan 31 14:07:16.951: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:07:16.956: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:07:16.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9668" for this suite.
Jan 31 14:07:25.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:07:25.185: INFO: namespace statefulset-9668 deletion completed in 8.186015412s

• [SLOW TEST:203.288 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:07:25.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:08:25.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3336" for this suite.
Jan 31 14:08:47.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:08:47.517: INFO: namespace container-probe-3336 deletion completed in 22.211301219s

• [SLOW TEST:82.333 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:08:47.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7874
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7874
STEP: Creating statefulset with conflicting port in namespace statefulset-7874
STEP: Waiting until pod test-pod will start running in namespace statefulset-7874
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7874
Jan 31 14:08:58.053: INFO: Observed stateful pod in namespace: statefulset-7874, name: ss-0, uid: 50d5004d-5943-4766-a675-22ebfe43e18b, status phase: Pending. Waiting for statefulset controller to delete.
Jan 31 14:13:58.053: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 31 14:13:58.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-7874'
Jan 31 14:13:58.246: INFO: stderr: ""
Jan 31 14:13:58.246: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-7874\nPriority:       0\nNode:           iruya-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-6f98bdb9c4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    <none>\nStatus:         Pending\nIP:             \nControlled By:  StatefulSet/ss\nContainers:\n  nginx:\n    Image:        docker.io/library/nginx:1.14-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zgzbs (ro)\nVolumes:\n  default-token-zgzbs:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-zgzbs\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed\n"
Jan 31 14:13:58.246: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-7874
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zgzbs (ro)
Volumes:
  default-token-zgzbs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zgzbs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Jan 31 14:13:58.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-7874 --tail=100'
Jan 31 14:13:58.476: INFO: rc: 1
Jan 31 14:13:58.477: INFO: 
Last 100 log lines of ss-0:

Jan 31 14:13:58.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-7874'
Jan 31 14:13:58.669: INFO: stderr: ""
Jan 31 14:13:58.670: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-7874\nPriority:     0\nNode:         iruya-node/10.96.3.65\nStart Time:   Fri, 31 Jan 2020 14:08:48 +0000\nLabels:       <none>\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nContainers:\n  nginx:\n    Container ID:   docker://9e6ee46ead9b97f4a81cbe7e1eaa236558284f10aae24378eef15fbb417aaab3\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Fri, 31 Jan 2020 14:08:55 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zgzbs (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-zgzbs:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-zgzbs\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                 Message\n  ----    ------   ----  ----                 -------\n  Normal  Pulled   5m5s  kubelet, iruya-node  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal  Created  5m4s  kubelet, iruya-node  Created container nginx\n  Normal  Started  5m3s  kubelet, iruya-node  Started container nginx\n"
Jan 31 14:13:58.670: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-7874
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Fri, 31 Jan 2020 14:08:48 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://9e6ee46ead9b97f4a81cbe7e1eaa236558284f10aae24378eef15fbb417aaab3
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Fri, 31 Jan 2020 14:08:55 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zgzbs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-zgzbs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zgzbs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m5s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m4s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m3s  kubelet, iruya-node  Started container nginx

Jan 31 14:13:58.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-7874 --tail=100'
Jan 31 14:13:58.807: INFO: stderr: ""
Jan 31 14:13:58.808: INFO: stdout: ""
Jan 31 14:13:58.808: INFO: 
Last 100 log lines of test-pod:

Jan 31 14:13:58.808: INFO: Deleting all statefulset in ns statefulset-7874
Jan 31 14:13:58.815: INFO: Scaling statefulset ss to 0
Jan 31 14:14:08.859: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:14:08.866: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-7874".
STEP: Found 11 events.
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:48 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-7874/ss is recreating failed Pod ss-0
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:48 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:48 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:48 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:49 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:49 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:50 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:53 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:54 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Jan 31 14:14:08.906: INFO: At 2020-01-31 14:08:55 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Jan 31 14:14:08.913: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Jan 31 14:14:08.913: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:08:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:08:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:08:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:08:48 +0000 UTC  }]
Jan 31 14:14:08.913: INFO: 
Jan 31 14:14:08.928: INFO: 
Logging node info for node iruya-node
Jan 31 14:14:08.935: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:22571804,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-31 14:13:31 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-31 14:13:31 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-31 14:13:31 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-31 14:13:31 
+0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
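The node dumps above report memory both as a raw byte count (4136013824) and as a binary-SI quantity (4039076Ki); the two are related by a factor of 1024. A minimal sketch of that conversion, useful for sanity-checking these dumps (hypothetical helper, not part of the e2e framework — the real apimachinery Quantity type handles many more suffixes):

```python
# Convert a Kubernetes binary-SI quantity string like "4039076Ki" to bytes.
# Hypothetical helper for reading the Capacity/Allocatable lists above.
BINARY_SUFFIXES = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}

def quantity_to_bytes(q: str) -> int:
    for suffix, factor in BINARY_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain decimal quantity, already in bytes

print(quantity_to_bytes("4039076Ki"))  # 4136013824, matching the memory capacity above
```

The same check works for the allocatable figure: 3936676Ki is 4031156224 bytes, as reported.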
Jan 31 14:14:08.936: INFO: 
Logging kubelet events for node iruya-node
Jan 31 14:14:08.941: INFO: 
Logging pods the kubelet thinks is on node iruya-node
Jan 31 14:14:08.956: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Jan 31 14:14:08.956: INFO: 	Container weave ready: true, restart count 0
Jan 31 14:14:08.956: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 14:14:08.956: INFO: test-pod started at 2020-01-31 14:08:48 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:08.956: INFO: 	Container nginx ready: true, restart count 0
Jan 31 14:14:08.956: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:08.956: INFO: 	Container kube-proxy ready: true, restart count 0
W0131 14:14:08.990241       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 14:14:09.066: INFO: 
Latency metrics for node iruya-node
Jan 31 14:14:09.066: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Jan 31 14:14:09.076: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:22571798,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-31 14:13:26 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-31 14:13:26 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-31 14:13:26 +0000 UTC 2019-08-04 08:52:04 +0000 UTC 
KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-31 14:13:26 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} 
{[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 
kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 31 14:14:09.077: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Jan 31 14:14:09.083: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Jan 31 14:14:09.095: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 31 14:14:09.095: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container coredns ready: true, restart count 0
Jan 31 14:14:09.095: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container etcd ready: true, restart count 0
Jan 31 14:14:09.095: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container weave ready: true, restart count 0
Jan 31 14:14:09.095: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 14:14:09.095: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container coredns ready: true, restart count 0
Jan 31 14:14:09.095: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 31 14:14:09.095: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 14:14:09.095: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Jan 31 14:14:09.095: INFO: 	Container kube-apiserver ready: true, restart count 0
W0131 14:14:09.105664       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 14:14:09.158: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Jan 31 14:14:09.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7874" for this suite.
Jan 31 14:14:31.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:14:31.373: INFO: namespace statefulset-7874 deletion completed in 22.208701901s

• Failure [343.854 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Jan 31 14:13:58.053: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:14:31.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7172.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7172.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
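The probe scripts above derive each pod's DNS A record from its IP with an awk pipeline: dots become dashes, and the namespace-scoped pod suffix is appended. A Python sketch of the same derivation, for illustration only (the probe pods themselves run the shell shown above):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk pipeline in the probe script: an IP like
    10.96.1.5 in namespace dns-7172 maps to the A record
    10-96-1-5.dns-7172.pod.cluster.local."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.96.1.5", "dns-7172"))  # 10-96-1-5.dns-7172.pod.cluster.local
```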

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 14:14:43.616: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.706: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.715: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.722: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.732: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.736: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.740: INFO: Unable to read jessie_udp@PodARecord from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.744: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053: the server could not find the requested resource (get pods dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053)
Jan 31 14:14:43.744: INFO: Lookups using dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 31 14:14:48.800: INFO: DNS probes using dns-7172/dns-test-8a90cbf0-5c7b-4d19-bd36-92e15a06b053 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:14:48.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7172" for this suite.
Jan 31 14:14:54.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:14:55.122: INFO: namespace dns-7172 deletion completed in 6.171809698s

• [SLOW TEST:23.748 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:14:55.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:15:05.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7973" for this suite.
Jan 31 14:15:11.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:15:11.679: INFO: namespace emptydir-wrapper-7973 deletion completed in 6.181856069s

• [SLOW TEST:16.556 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:15:11.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:15:11.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:15:20.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9796" for this suite.
Jan 31 14:16:12.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:16:12.406: INFO: namespace pods-9796 deletion completed in 52.180731675s

• [SLOW TEST:60.727 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:16:12.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:16:21.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2917" for this suite.
Jan 31 14:16:43.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:16:43.728: INFO: namespace replication-controller-2917 deletion completed in 22.165466594s

• [SLOW TEST:31.321 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:16:43.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5abcc0d1-6415-4b66-82d2-b81044bd642e
STEP: Creating a pod to test consume configMaps
Jan 31 14:16:43.921: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96" in namespace "projected-9712" to be "success or failure"
Jan 31 14:16:43.932: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.708728ms
Jan 31 14:16:45.945: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02419806s
Jan 31 14:16:47.962: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041571832s
Jan 31 14:16:49.970: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049160482s
Jan 31 14:16:52.000: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96": Phase="Running", Reason="", readiness=true. Elapsed: 8.079324022s
Jan 31 14:16:54.014: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092734988s
STEP: Saw pod success
Jan 31 14:16:54.014: INFO: Pod "pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96" satisfied condition "success or failure"
Jan 31 14:16:54.018: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 14:16:54.324: INFO: Waiting for pod pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96 to disappear
Jan 31 14:16:54.329: INFO: Pod pod-projected-configmaps-539e6de6-969d-4f6e-a36d-fa247b3a2e96 no longer exists
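The "Elapsed" lines above come from the framework polling the pod's phase until it reaches Succeeded or the 5m0s timeout expires. A minimal, cluster-free sketch of that wait loop (hypothetical; the actual framework uses Go's wait.Poll helpers against the API server):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0):
    """Poll check() every interval_s seconds until it returns True,
    or give up once timeout_s has elapsed; mirrors the e2e
    framework's "Waiting up to 5m0s for pod ..." behavior."""
    start = time.monotonic()
    while True:
        if check():
            return True
        if time.monotonic() - start >= timeout_s:
            return False
        time.sleep(interval_s)

# Stand-in for "pod phase is Succeeded": the phase advances on each poll.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
state = {"phase": "Pending"}

def pod_succeeded():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] == "Succeeded"

print(wait_for_condition(pod_succeeded, timeout_s=5.0, interval_s=0.01))  # True
```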
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:16:54.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9712" for this suite.
Jan 31 14:17:00.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:17:00.590: INFO: namespace projected-9712 deletion completed in 6.250436801s

• [SLOW TEST:16.862 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:17:00.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 14:17:00.704: INFO: Waiting up to 5m0s for pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057" in namespace "emptydir-5960" to be "success or failure"
Jan 31 14:17:00.711: INFO: Pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449071ms
Jan 31 14:17:02.729: INFO: Pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024908096s
Jan 31 14:17:04.735: INFO: Pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030490221s
Jan 31 14:17:07.356: INFO: Pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651678189s
Jan 31 14:17:09.371: INFO: Pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.666485744s
STEP: Saw pod success
Jan 31 14:17:09.371: INFO: Pod "pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057" satisfied condition "success or failure"
Jan 31 14:17:09.381: INFO: Trying to get logs from node iruya-node pod pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057 container test-container: 
STEP: delete the pod
Jan 31 14:17:09.507: INFO: Waiting for pod pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057 to disappear
Jan 31 14:17:09.514: INFO: Pod pod-c5aea1e8-3c7d-4ae8-8318-0b93a427e057 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:17:09.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5960" for this suite.
Jan 31 14:17:15.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:17:15.768: INFO: namespace emptydir-5960 deletion completed in 6.246314932s

• [SLOW TEST:15.177 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
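The "Waiting up to 5m0s ... Elapsed:" lines above are the framework polling the pod's phase until it reaches "Succeeded" or the timeout expires. A minimal sketch of that wait pattern (illustrative only; the real framework uses `wait.PollImmediate` in Go with richer error reporting):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll condition() until it returns truthy or the timeout elapses.

    Mirrors the log's 'Waiting up to 5m0s for pod ... Elapsed: ...' loop:
    check, sleep a fixed interval, check again, give up at the deadline.
    """
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

In the log the condition is "pod phase is Succeeded or Failed", checked roughly every two seconds with a five-minute budget.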
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:17:15.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 14:17:16.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9430'
Jan 31 14:17:18.271: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 14:17:18.271: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 31 14:17:20.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9430'
Jan 31 14:17:20.536: INFO: stderr: ""
Jan 31 14:17:20.537: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:17:20.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9430" for this suite.
Jan 31 14:17:26.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:17:26.775: INFO: namespace kubectl-9430 deletion completed in 6.227466694s

• [SLOW TEST:11.006 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
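The deprecation warning in this spec's stderr is because `kubectl run --generator=deployment/apps.v1` was being phased out. Roughly what that generator expanded to is the Deployment below (a sketch: the generator labels pods with the `run: <name>` key; field values beyond name and image are defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

The replacement the warning suggests is `kubectl create deployment` (which labels with `app:` instead of `run:`) or `kubectl run --generator=run-pod/v1` for a bare pod.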
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:17:26.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 31 14:17:26.939: INFO: Waiting up to 5m0s for pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8" in namespace "emptydir-5626" to be "success or failure"
Jan 31 14:17:26.950: INFO: Pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.564516ms
Jan 31 14:17:28.956: INFO: Pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016676976s
Jan 31 14:17:31.012: INFO: Pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072399817s
Jan 31 14:17:33.025: INFO: Pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085681452s
Jan 31 14:17:35.038: INFO: Pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098912364s
STEP: Saw pod success
Jan 31 14:17:35.038: INFO: Pod "pod-eaf06191-ad33-45c0-9149-1035abeee2f8" satisfied condition "success or failure"
Jan 31 14:17:35.045: INFO: Trying to get logs from node iruya-node pod pod-eaf06191-ad33-45c0-9149-1035abeee2f8 container test-container: 
STEP: delete the pod
Jan 31 14:17:35.210: INFO: Waiting for pod pod-eaf06191-ad33-45c0-9149-1035abeee2f8 to disappear
Jan 31 14:17:35.216: INFO: Pod pod-eaf06191-ad33-45c0-9149-1035abeee2f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:17:35.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5626" for this suite.
Jan 31 14:17:41.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:17:41.554: INFO: namespace emptydir-5626 deletion completed in 6.329976505s

• [SLOW TEST:14.778 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
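This spec checks that a tmpfs-backed emptyDir mount carries the expected default mode. A sketch of the kind of pod it creates (image and command are illustrative; the suite uses its own test container that prints the mount's permissions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-sketch   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # assumed; not the suite's actual image
    command: ["sh", "-c", "stat -c %a /test-volume"]  # print mount mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```

Setting `medium: Memory` is what makes the emptyDir a tmpfs mount; the test then asserts on the mode of the mount point.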
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:17:41.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 31 14:17:41.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8518 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 31 14:17:50.685: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0131 14:17:48.853004    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a320) Create stream\nI0131 14:17:48.853155    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a320) Stream added, broadcasting: 1\nI0131 14:17:48.862078    1430 log.go:172] (0xc000a7e0b0) Reply frame received for 1\nI0131 14:17:48.862201    1430 log.go:172] (0xc000a7e0b0) (0xc0003ee000) Create stream\nI0131 14:17:48.862231    1430 log.go:172] (0xc000a7e0b0) (0xc0003ee000) Stream added, broadcasting: 3\nI0131 14:17:48.864986    1430 log.go:172] (0xc000a7e0b0) Reply frame received for 3\nI0131 14:17:48.865094    1430 log.go:172] (0xc000a7e0b0) (0xc000a52320) Create stream\nI0131 14:17:48.865106    1430 log.go:172] (0xc000a7e0b0) (0xc000a52320) Stream added, broadcasting: 5\nI0131 14:17:48.869766    1430 log.go:172] (0xc000a7e0b0) Reply frame received for 5\nI0131 14:17:48.869801    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a3c0) Create stream\nI0131 14:17:48.869810    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a3c0) Stream added, broadcasting: 7\nI0131 14:17:48.871865    1430 log.go:172] (0xc000a7e0b0) Reply frame received for 7\nI0131 14:17:48.872247    1430 log.go:172] (0xc0003ee000) (3) Writing data frame\nI0131 14:17:48.872439    1430 log.go:172] (0xc0003ee000) (3) Writing data frame\nI0131 14:17:48.880557    1430 log.go:172] (0xc000a7e0b0) Data frame received for 5\nI0131 14:17:48.880587    1430 log.go:172] (0xc000a52320) (5) Data frame handling\nI0131 14:17:48.880607    1430 log.go:172] (0xc000a52320) (5) Data frame sent\nI0131 14:17:48.885097    1430 log.go:172] (0xc000a7e0b0) Data frame received for 5\nI0131 14:17:48.885113    1430 log.go:172] (0xc000a52320) (5) Data frame handling\nI0131 14:17:48.885126    1430 log.go:172] (0xc000a52320) (5) Data frame 
sent\nI0131 14:17:50.645674    1430 log.go:172] (0xc000a7e0b0) (0xc0003ee000) Stream removed, broadcasting: 3\nI0131 14:17:50.646597    1430 log.go:172] (0xc000a7e0b0) Data frame received for 1\nI0131 14:17:50.646713    1430 log.go:172] (0xc000a2a320) (1) Data frame handling\nI0131 14:17:50.646780    1430 log.go:172] (0xc000a2a320) (1) Data frame sent\nI0131 14:17:50.646831    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a320) Stream removed, broadcasting: 1\nI0131 14:17:50.647259    1430 log.go:172] (0xc000a7e0b0) (0xc000a52320) Stream removed, broadcasting: 5\nI0131 14:17:50.648325    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a3c0) Stream removed, broadcasting: 7\nI0131 14:17:50.648625    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a320) Stream removed, broadcasting: 1\nI0131 14:17:50.648658    1430 log.go:172] (0xc000a7e0b0) (0xc0003ee000) Stream removed, broadcasting: 3\nI0131 14:17:50.648675    1430 log.go:172] (0xc000a7e0b0) (0xc000a52320) Stream removed, broadcasting: 5\nI0131 14:17:50.648696    1430 log.go:172] (0xc000a7e0b0) (0xc000a2a3c0) Stream removed, broadcasting: 7\nI0131 14:17:50.649105    1430 log.go:172] (0xc000a7e0b0) Go away received\n"
Jan 31 14:17:50.685: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:17:52.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8518" for this suite.
Jan 31 14:17:58.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:17:58.885: INFO: namespace kubectl-8518 deletion completed in 6.179527946s

• [SLOW TEST:17.331 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
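The deprecated `--generator=job/v1` invocation above expands to roughly the Job below (a sketch; `stdin: true` is what lets `--attach --stdin` pipe `abcd1234` into `cat`, and `--rm` makes kubectl delete the Job after the attach session ends):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true
```

That matches the stdout captured above: the echoed stdin, then `stdin closed`, then the `job.batch "e2e-test-rm-busybox-job" deleted` line from `--rm`.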
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:17:58.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 31 14:17:59.018: INFO: Waiting up to 5m0s for pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d" in namespace "downward-api-22" to be "success or failure"
Jan 31 14:17:59.060: INFO: Pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.858986ms
Jan 31 14:18:01.083: INFO: Pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063596767s
Jan 31 14:18:03.088: INFO: Pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069170693s
Jan 31 14:18:05.096: INFO: Pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077289931s
Jan 31 14:18:07.117: INFO: Pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098167001s
STEP: Saw pod success
Jan 31 14:18:07.117: INFO: Pod "downward-api-32df5d84-facc-4a61-a924-4211637f146d" satisfied condition "success or failure"
Jan 31 14:18:07.124: INFO: Trying to get logs from node iruya-node pod downward-api-32df5d84-facc-4a61-a924-4211637f146d container dapi-container: 
STEP: delete the pod
Jan 31 14:18:07.249: INFO: Waiting for pod downward-api-32df5d84-facc-4a61-a924-4211637f146d to disappear
Jan 31 14:18:07.262: INFO: Pod downward-api-32df5d84-facc-4a61-a924-4211637f146d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:18:07.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-22" for this suite.
Jan 31 14:18:13.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:18:13.487: INFO: namespace downward-api-22 deletion completed in 6.214685506s

• [SLOW TEST:14.601 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
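The Downward API spec injects the node's IP into the container's environment via a `fieldRef`. A minimal sketch of such a pod (names and image are illustrative; `status.hostIP` is the actual downward API field the test exercises):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-sketch      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29           # assumed; not the suite's actual image
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolves to the node's IP at runtime
```

The test then reads the container's logs and asserts the printed value matches the node IP (e.g. `10.96.3.65` for `iruya-node` in this run).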
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:18:13.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:18:20.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2485" for this suite.
Jan 31 14:18:26.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:18:26.264: INFO: namespace namespaces-2485 deletion completed in 6.23859554s
STEP: Destroying namespace "nsdeletetest-7587" for this suite.
Jan 31 14:18:26.268: INFO: Namespace nsdeletetest-7587 was already deleted
STEP: Destroying namespace "nsdeletetest-2664" for this suite.
Jan 31 14:18:32.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:18:32.581: INFO: namespace nsdeletetest-2664 deletion completed in 6.313397932s

• [SLOW TEST:19.094 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:18:32.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:18:32.810: INFO: Creating deployment "nginx-deployment"
Jan 31 14:18:32.822: INFO: Waiting for observed generation 1
Jan 31 14:18:35.703: INFO: Waiting for all required pods to come up
Jan 31 14:18:36.253: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 31 14:19:04.343: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 31 14:19:04.353: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 31 14:19:04.375: INFO: Updating deployment nginx-deployment
Jan 31 14:19:04.375: INFO: Waiting for observed generation 2
Jan 31 14:19:06.617: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 31 14:19:07.080: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 31 14:19:07.212: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 31 14:19:08.504: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 31 14:19:08.505: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 31 14:19:08.513: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 31 14:19:08.523: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 31 14:19:08.523: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 31 14:19:08.541: INFO: Updating deployment nginx-deployment
Jan 31 14:19:08.541: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 31 14:19:08.741: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 31 14:19:13.510: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
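The replica counts verified above follow from proportional scaling: the deployment goes from 10 to 30 replicas (33 allowed with `maxSurge: 3`), and the 20 extra replicas are split between the two ReplicaSets (at 8 and 5) in proportion to their current sizes. A largest-remainder sketch that reproduces this run's numbers (illustrative arithmetic, not the deployment controller's exact Go code):

```python
def proportional_additions(replicas, extra):
    """Split `extra` new replicas across ReplicaSets proportionally
    to their current sizes, handing out rounding leftovers to the
    largest fractional remainders (largest-remainder method)."""
    total = sum(replicas)
    quotas = [r * extra / total for r in replicas]
    base = [int(q) for q in quotas]          # floor of each share
    leftover = extra - sum(base)
    # indices sorted by fractional remainder, largest first
    order = sorted(range(len(replicas)),
                   key=lambda i: quotas[i] - base[i], reverse=True)
    for i in order[:leftover]:
        base[i] += 1
    return base
```

With this run's numbers, `proportional_additions([8, 5], 20)` yields `[12, 8]`, i.e. the first ReplicaSet scales 8 → 20 and the second 5 → 13, matching the `.spec.replicas = 20` and `= 13` verifications in the log.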
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 31 14:19:15.975: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6252,SelfLink:/apis/apps/v1/namespaces/deployment-6252/deployments/nginx-deployment,UID:6408537a-40e0-4964-aba9-e92d5bb0c2b6,ResourceVersion:22572824,Generation:3,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-31 14:19:08 +0000 UTC 2020-01-31 14:19:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-31 14:19:11 +0000 UTC 2020-01-31 14:18:32 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 31 14:19:17.164: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6252,SelfLink:/apis/apps/v1/namespaces/deployment-6252/replicasets/nginx-deployment-55fb7cb77f,UID:a4f09219-7ad6-444f-b0b1-4fc23288e206,ResourceVersion:22572816,Generation:3,CreationTimestamp:2020-01-31 14:19:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6408537a-40e0-4964-aba9-e92d5bb0c2b6 0xc00339fcb7 0xc00339fcb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 14:19:17.165: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 31 14:19:17.165: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6252,SelfLink:/apis/apps/v1/namespaces/deployment-6252/replicasets/nginx-deployment-7b8c6f4498,UID:e71fc0d9-d120-4b3f-babb-6817c107dedd,ResourceVersion:22572819,Generation:3,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6408537a-40e0-4964-aba9-e92d5bb0c2b6 0xc00339fd87 0xc00339fd88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 31 14:19:19.179: INFO: Pod "nginx-deployment-55fb7cb77f-8t4wp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8t4wp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-8t4wp,UID:a3458cb6-86b1-4734-91db-caaca457f4b2,ResourceVersion:22572752,Generation:0,CreationTimestamp:2020-01-31 14:19:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002fbb9f0 0xc002fbb9f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002fbba70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fbba90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 14:19:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.179: INFO: Pod "nginx-deployment-55fb7cb77f-c9vx9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c9vx9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-c9vx9,UID:b6f6ad4a-bc4a-4856-a7f2-3debe0a9cbac,ResourceVersion:22572844,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002fbbb67 0xc002fbbb68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fbbbe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fbbc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-31 14:19:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.179: INFO: Pod "nginx-deployment-55fb7cb77f-cfkst" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cfkst,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-cfkst,UID:be904942-7112-4cf8-8ed8-cf711e617be7,ResourceVersion:22572800,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002fbbcd7 0xc002fbbcd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002fbbd50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fbbd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.179: INFO: Pod "nginx-deployment-55fb7cb77f-dg9gx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dg9gx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-dg9gx,UID:813e5dc3-c7f5-4f24-a4fa-80fce15925e1,ResourceVersion:22572760,Generation:0,CreationTimestamp:2020-01-31 14:19:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002fbbdf7 0xc002fbbdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002fbbe70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fbbe90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 14:19:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.180: INFO: Pod "nginx-deployment-55fb7cb77f-f57fd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f57fd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-f57fd,UID:15fda22c-da71-466d-9cab-6bec2a23f621,ResourceVersion:22572812,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002fbbf67 0xc002fbbf68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fbbfd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fbbff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.180: INFO: Pod "nginx-deployment-55fb7cb77f-gl2lx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gl2lx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-gl2lx,UID:b0f57c3c-8d0f-400f-bc7f-f5f27357d59a,ResourceVersion:22572742,Generation:0,CreationTimestamp:2020-01-31 14:19:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b76077 0xc002b76078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002b760f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 14:19:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.180: INFO: Pod "nginx-deployment-55fb7cb77f-jdwvj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jdwvj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-jdwvj,UID:2a1d7397-a0a4-4005-89df-63e4c6ad31fd,ResourceVersion:22572730,Generation:0,CreationTimestamp:2020-01-31 14:19:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b761e7 0xc002b761e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b76250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-31 14:19:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.180: INFO: Pod "nginx-deployment-55fb7cb77f-k2qtp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k2qtp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-k2qtp,UID:95a98ee8-6bf6-4307-9936-6b663074ca16,ResourceVersion:22572753,Generation:0,CreationTimestamp:2020-01-31 14:19:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b76347 0xc002b76348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b763b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b763d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:05 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-31 14:19:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.180: INFO: Pod "nginx-deployment-55fb7cb77f-rb2j8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rb2j8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-rb2j8,UID:7e740a0b-87f0-441c-9d02-1b62369b7a44,ResourceVersion:22572841,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b764a7 0xc002b764a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002b76520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 14:19:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.181: INFO: Pod "nginx-deployment-55fb7cb77f-tjqxf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tjqxf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-tjqxf,UID:7c747c51-393b-45b1-8f22-320f0a5949d0,ResourceVersion:22572803,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b76617 0xc002b76618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002b76690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b766b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.181: INFO: Pod "nginx-deployment-55fb7cb77f-vds9k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vds9k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-vds9k,UID:0eb312f7-03c8-42f1-acea-665a5fc64ae2,ResourceVersion:22572802,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b76737 0xc002b76738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002b767b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b767d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.181: INFO: Pod "nginx-deployment-55fb7cb77f-xv2sp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xv2sp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-xv2sp,UID:41917584-13a4-4070-9913-b983784b7c37,ResourceVersion:22572831,Generation:0,CreationTimestamp:2020-01-31 14:19:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b76857 0xc002b76858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b768c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b768e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-31 14:19:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.181: INFO: Pod "nginx-deployment-55fb7cb77f-zl4bh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zl4bh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-55fb7cb77f-zl4bh,UID:b30c9bdd-eaa8-4e1b-9290-0ab6e65bd11a,ResourceVersion:22572838,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a4f09219-7ad6-444f-b0b1-4fc23288e206 0xc002b769b7 0xc002b769b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b76a20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-31 14:19:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.181: INFO: Pod "nginx-deployment-7b8c6f4498-2ncv2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2ncv2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-2ncv2,UID:f3200b47-f7be-4f16-9339-632d3cdf1c90,ResourceVersion:22572822,Generation:0,CreationTimestamp:2020-01-31 14:19:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b76b17 0xc002b76b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b76b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-31 14:19:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.182: INFO: Pod "nginx-deployment-7b8c6f4498-2t2vs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2t2vs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-2t2vs,UID:cca7f994-e3de-4c95-ad0e-47132a75cf6f,ResourceVersion:22572807,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b76c67 0xc002b76c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b76ce0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.182: INFO: Pod "nginx-deployment-7b8c6f4498-5nskz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5nskz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-5nskz,UID:3e0c099f-3731-4b6a-8d79-f9a5b4f598f4,ResourceVersion:22572815,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b76d87 0xc002b76d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b76df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.182: INFO: Pod "nginx-deployment-7b8c6f4498-7m9bn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7m9bn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-7m9bn,UID:26dde545-c499-4cc2-b530-bcc479723708,ResourceVersion:22572801,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b76e97 0xc002b76e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b76f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b76f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-98w7c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98w7c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-98w7c,UID:0c92cc12-165a-458b-81f9-778dc198abd3,ResourceVersion:22572648,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b76fb7 0xc002b76fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-31 14:18:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:18:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://944529c246f1a0a71044ccb2e6ae968add318cf8e6f1b2aa9c2943b045567e59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-b5cbx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b5cbx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-b5cbx,UID:de7d8b28-6d33-4774-8ae0-163b37d1f05f,ResourceVersion:22572698,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77117 0xc002b77118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b771a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b771c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-31 14:18:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:19:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fb044b78ad3149e03a573eb0dffb10eec3d212c87f26b05130568eabd5dca32e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-bk98n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bk98n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-bk98n,UID:b5b908ba-6aaf-4cdb-bf04-f3e182d69862,ResourceVersion:22572689,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77297 0xc002b77298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-31 14:18:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:18:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9c63d3fdb2826fe3dcf4ce7689bc79e27a94526c94da44c801086e1e26709f02}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-bnxxd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bnxxd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-bnxxd,UID:2a62a8ce-a9b2-4211-93f3-4297772f6517,ResourceVersion:22572654,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77407 0xc002b77408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-31 14:18:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:18:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1beb97d2fb1085a44d5ba6bc65f66f525cd853c8dd7a310d08d8008133e3464e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-fdvzk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fdvzk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-fdvzk,UID:c13bc515-ffe4-43e6-8312-cb125ed24236,ResourceVersion:22572835,Generation:0,CreationTimestamp:2020-01-31 14:19:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77567 0xc002b77568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b775e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 14:19:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-fqbt4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fqbt4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-fqbt4,UID:e6f74fd9-00ff-49b9-849f-106bd0481497,ResourceVersion:22572823,Generation:0,CreationTimestamp:2020-01-31 14:19:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b776c7 0xc002b776c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-31 14:19:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.183: INFO: Pod "nginx-deployment-7b8c6f4498-hwrhh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwrhh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-hwrhh,UID:7d952860-1433-40be-a1df-bc210b449e88,ResourceVersion:22572813,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77827 0xc002b77828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b778b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-klp89" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-klp89,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-klp89,UID:6537fb6b-c6da-4b09-99b8-99159f42e77a,ResourceVersion:22572658,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77937 0xc002b77938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b779b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b779d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-31 14:18:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:18:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7658b3769219d2ac33b711b9500f582c80b573143ef2923541857a77c9f17cdf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-l9rb8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l9rb8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-l9rb8,UID:4c493103-87a3-438d-a216-c2a151af6a8e,ResourceVersion:22572799,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77ab7 0xc002b77ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-lth27" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lth27,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-lth27,UID:bb4b969d-7398-4cd5-ace0-f8dbcad08574,ResourceVersion:22572804,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77bd7 0xc002b77bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-pfczr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pfczr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-pfczr,UID:b9ef8432-0d16-4c7f-8fd7-35de2cff9ac0,ResourceVersion:22572808,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77ce7 0xc002b77ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-q8sc4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q8sc4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-q8sc4,UID:d96a0b4d-e01b-4951-8723-f0b79da2fef4,ResourceVersion:22572814,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77df7 0xc002b77df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:11 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-t4bkd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t4bkd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-t4bkd,UID:e99ff250-23a6-43e1-bac3-b863ca9da64f,ResourceVersion:22572650,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc002b77f17 0xc002b77f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-31 14:18:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:18:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4eb557d4e53ad59fec2d79ff447aa0bfe0d40b33b56bfcf27fb890aa77f995a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.184: INFO: Pod "nginx-deployment-7b8c6f4498-tfgfm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tfgfm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-tfgfm,UID:9c6ec7c9-6e32-4cf5-b165-8b9a2dfc99f7,ResourceVersion:22572683,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc000d7a077 0xc000d7a078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d7a130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d7a170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:33 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-31 14:18:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:19:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bfa31c97f4800131ccecb7ca044775cb9cbb54f23e2fd5c60b335f0c396e4dde}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.185: INFO: Pod "nginx-deployment-7b8c6f4498-wrzct" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wrzct,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-wrzct,UID:782ad3c3-9645-4cab-9c57-eb8ab00201ae,ResourceVersion:22572805,Generation:0,CreationTimestamp:2020-01-31 14:19:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc000d7a417 0xc000d7a418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d7a4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d7a4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:10 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 14:19:19.185: INFO: Pod "nginx-deployment-7b8c6f4498-x9xjv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x9xjv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6252,SelfLink:/api/v1/namespaces/deployment-6252/pods/nginx-deployment-7b8c6f4498-x9xjv,UID:20fbf4ab-c806-46f0-8d93-f5be5c072492,ResourceVersion:22572692,Generation:0,CreationTimestamp:2020-01-31 14:18:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 e71fc0d9-d120-4b3f-babb-6817c107dedd 0xc000d7a657 0xc000d7a658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qkbn6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbn6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbn6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d7a6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d7a710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:19:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:18:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-31 14:18:32 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 14:19:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f47d3e610f16f9fef9040f8114324b62e407865e898a4b30d06601ffb6afb117}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:19:19.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6252" for this suite.
Jan 31 14:20:04.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:20:04.399: INFO: namespace deployment-6252 deletion completed in 44.332720141s

• [SLOW TEST:91.815 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
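The proportional-scaling test that just completed resizes a Deployment while a rolling update is in flight, and the extra replicas are split across the old and new ReplicaSets in proportion to their sizes. A minimal manifest sketch of the kind of object involved (field values here are illustrative assumptions, not read from the log; only the image and pod labels match what the log shows):

```yaml
# Hypothetical Deployment approximating what the e2e test creates.
# maxSurge / maxUnavailable are the rolling-update knobs that make
# mid-rollout scaling "proportional" across old and new ReplicaSets.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # matches the pod name prefix in the log
spec:
  replicas: 10                    # illustrative count
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3                 # illustrative; the real test sets its own values
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx               # label seen on every pod dump above
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # image used throughout the log
```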
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:20:04.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c3a10b57-15e7-4e13-b667-b4fe93b4f5bd
STEP: Creating a pod to test consume configMaps
Jan 31 14:20:04.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e" in namespace "configmap-7698" to be "success or failure"
Jan 31 14:20:04.716: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e": Phase="Pending", Reason="", readiness=false. Elapsed: 94.961938ms
Jan 31 14:20:06.735: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114183785s
Jan 31 14:20:08.759: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137705251s
Jan 31 14:20:10.766: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14512019s
Jan 31 14:20:12.786: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165607747s
Jan 31 14:20:14.799: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177787764s
STEP: Saw pod success
Jan 31 14:20:14.799: INFO: Pod "pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e" satisfied condition "success or failure"
Jan 31 14:20:14.803: INFO: Trying to get logs from node iruya-node pod pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e container configmap-volume-test: 
STEP: delete the pod
Jan 31 14:20:14.888: INFO: Waiting for pod pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e to disappear
Jan 31 14:20:14.893: INFO: Pod pod-configmaps-488208cc-9dfe-41da-93c4-503684835f9e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:20:14.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7698" for this suite.
Jan 31 14:20:20.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:20:21.079: INFO: namespace configmap-7698 deletion completed in 6.179697881s

• [SLOW TEST:16.680 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:20:21.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:20:29.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6594" for this suite.
Jan 31 14:20:35.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:20:35.571: INFO: namespace kubelet-test-6594 deletion completed in 6.234577505s

• [SLOW TEST:14.492 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:20:35.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-394e722e-e315-4713-97a5-2b04dd8aa2dc
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:20:35.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1089" for this suite.
Jan 31 14:20:41.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:20:41.846: INFO: namespace secrets-1089 deletion completed in 6.170535701s

• [SLOW TEST:6.274 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:20:41.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:20:42.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5835" for this suite.
Jan 31 14:21:04.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:21:04.165: INFO: namespace pods-5835 deletion completed in 22.129423373s

• [SLOW TEST:22.319 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:21:04.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 31 14:21:12.415: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 31 14:21:22.639: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:21:22.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1150" for this suite.
Jan 31 14:21:28.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:21:28.805: INFO: namespace pods-1150 deletion completed in 6.146609073s

• [SLOW TEST:24.639 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:21:28.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-2ad3bbd2-2e97-4fa0-98cb-54977ea4410b
STEP: Creating a pod to test consume secrets
Jan 31 14:21:28.943: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53" in namespace "projected-1296" to be "success or failure"
Jan 31 14:21:28.968: INFO: Pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53": Phase="Pending", Reason="", readiness=false. Elapsed: 25.521474ms
Jan 31 14:21:30.978: INFO: Pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035037103s
Jan 31 14:21:33.014: INFO: Pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070712278s
Jan 31 14:21:35.027: INFO: Pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083883063s
Jan 31 14:21:37.035: INFO: Pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092531032s
STEP: Saw pod success
Jan 31 14:21:37.036: INFO: Pod "pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53" satisfied condition "success or failure"
Jan 31 14:21:37.040: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53 container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 14:21:37.145: INFO: Waiting for pod pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53 to disappear
Jan 31 14:21:37.151: INFO: Pod pod-projected-secrets-636f47f6-7248-421d-b98f-08683af0dc53 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:21:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1296" for this suite.
Jan 31 14:21:43.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:21:43.410: INFO: namespace projected-1296 deletion completed in 6.252697883s

• [SLOW TEST:14.604 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:21:43.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan 31 14:21:43.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3032'
Jan 31 14:21:43.935: INFO: stderr: ""
Jan 31 14:21:43.936: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan 31 14:21:44.950: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:44.951: INFO: Found 0 / 1
Jan 31 14:21:45.944: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:45.945: INFO: Found 0 / 1
Jan 31 14:21:46.946: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:46.946: INFO: Found 0 / 1
Jan 31 14:21:47.943: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:47.943: INFO: Found 0 / 1
Jan 31 14:21:48.948: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:48.948: INFO: Found 0 / 1
Jan 31 14:21:49.953: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:49.954: INFO: Found 0 / 1
Jan 31 14:21:50.961: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:50.961: INFO: Found 1 / 1
Jan 31 14:21:50.961: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 14:21:50.965: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:21:50.965: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Jan 31 14:21:50.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7prxp redis-master --namespace=kubectl-3032'
Jan 31 14:21:51.137: INFO: stderr: ""
Jan 31 14:21:51.137: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Jan 14:21:50.217 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Jan 14:21:50.218 # Server started, Redis version 3.2.12\n1:M 31 Jan 14:21:50.218 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Jan 14:21:50.218 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 31 14:21:51.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7prxp redis-master --namespace=kubectl-3032 --tail=1'
Jan 31 14:21:51.289: INFO: stderr: ""
Jan 31 14:21:51.289: INFO: stdout: "1:M 31 Jan 14:21:50.218 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 31 14:21:51.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7prxp redis-master --namespace=kubectl-3032 --limit-bytes=1'
Jan 31 14:21:51.436: INFO: stderr: ""
Jan 31 14:21:51.436: INFO: stdout: " "
STEP: exposing timestamps
Jan 31 14:21:51.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7prxp redis-master --namespace=kubectl-3032 --tail=1 --timestamps'
Jan 31 14:21:51.547: INFO: stderr: ""
Jan 31 14:21:51.548: INFO: stdout: "2020-01-31T14:21:50.218652115Z 1:M 31 Jan 14:21:50.218 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 31 14:21:54.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7prxp redis-master --namespace=kubectl-3032 --since=1s'
Jan 31 14:21:54.181: INFO: stderr: ""
Jan 31 14:21:54.181: INFO: stdout: ""
Jan 31 14:21:54.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7prxp redis-master --namespace=kubectl-3032 --since=24h'
Jan 31 14:21:54.334: INFO: stderr: ""
Jan 31 14:21:54.334: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Jan 14:21:50.217 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Jan 14:21:50.218 # Server started, Redis version 3.2.12\n1:M 31 Jan 14:21:50.218 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Jan 14:21:50.218 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 31 14:21:54.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3032'
Jan 31 14:21:54.445: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:21:54.445: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 31 14:21:54.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3032'
Jan 31 14:21:54.692: INFO: stderr: "No resources found.\n"
Jan 31 14:21:54.692: INFO: stdout: ""
Jan 31 14:21:54.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3032 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:21:54.964: INFO: stderr: ""
Jan 31 14:21:54.964: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:21:54.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3032" for this suite.
Jan 31 14:22:16.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:22:17.076: INFO: namespace kubectl-3032 deletion completed in 22.102939853s

• [SLOW TEST:33.666 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:22:17.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1474a28c-aeee-4f0b-86fc-2e55bb16d6ab
STEP: Creating a pod to test consume secrets
Jan 31 14:22:17.173: INFO: Waiting up to 5m0s for pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c" in namespace "secrets-9454" to be "success or failure"
Jan 31 14:22:17.180: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772857ms
Jan 31 14:22:19.192: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018559645s
Jan 31 14:22:21.200: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02660347s
Jan 31 14:22:23.209: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035659635s
Jan 31 14:22:25.222: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049005167s
Jan 31 14:22:27.230: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056649177s
STEP: Saw pod success
Jan 31 14:22:27.230: INFO: Pod "pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c" satisfied condition "success or failure"
Jan 31 14:22:27.233: INFO: Trying to get logs from node iruya-node pod pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c container secret-volume-test: 
STEP: delete the pod
Jan 31 14:22:27.324: INFO: Waiting for pod pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c to disappear
Jan 31 14:22:27.335: INFO: Pod pod-secrets-769ea822-2356-42b8-b638-cf5754a94f6c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:22:27.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9454" for this suite.
Jan 31 14:22:33.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:22:33.556: INFO: namespace secrets-9454 deletion completed in 6.210090114s

• [SLOW TEST:16.479 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:22:33.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:22:34.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5224" for this suite.
Jan 31 14:22:40.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:22:40.203: INFO: namespace kubelet-test-5224 deletion completed in 6.120402923s

• [SLOW TEST:6.646 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:22:40.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 14:22:40.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1662'
Jan 31 14:22:40.521: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 31 14:22:40.521: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Jan 31 14:22:40.596: INFO: scanned /root for discovery docs: 
Jan 31 14:22:40.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1662'
Jan 31 14:23:02.900: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 14:23:02.901: INFO: stdout: "Created e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126\nScaling up e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 31 14:23:02.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1662'
Jan 31 14:23:03.047: INFO: stderr: ""
Jan 31 14:23:03.048: INFO: stdout: "e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126-wf5zm "
Jan 31 14:23:03.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126-wf5zm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1662'
Jan 31 14:23:03.182: INFO: stderr: ""
Jan 31 14:23:03.183: INFO: stdout: "true"
Jan 31 14:23:03.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126-wf5zm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1662'
Jan 31 14:23:03.272: INFO: stderr: ""
Jan 31 14:23:03.272: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 31 14:23:03.272: INFO: e2e-test-nginx-rc-7a9d36337d99e9786b3d926485bcd126-wf5zm is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 31 14:23:03.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1662'
Jan 31 14:23:03.369: INFO: stderr: ""
Jan 31 14:23:03.369: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:23:03.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1662" for this suite.
Jan 31 14:23:21.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:23:21.609: INFO: namespace kubectl-1662 deletion completed in 18.153648312s

• [SLOW TEST:41.406 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
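The rolling-update stdout above records the scale dance: the replacement RC goes from 0 to 1 while the old RC goes from 1 to 0, keeping at least 1 pod available and never exceeding 2 pods total. A minimal shell sketch of that ordering, using the numbers from the log (this is a simplification of the idea, not kubectl's actual implementation):

```shell
# Sketch of the rolling-update ordering seen in the stdout above:
# keep >= min_available pods, never exceed max, until old=0 and new=desired.
old=1; new=0; desired=1
max=2; min_available=1
while [ "$old" -gt 0 ] || [ "$new" -lt "$desired" ]; do
  if [ $((old + new)) -lt "$max" ] && [ "$new" -lt "$desired" ]; then
    new=$((new + 1))    # "Scaling e2e-test-nginx-rc-... up to 1"
  else
    old=$((old - 1))    # "Scaling e2e-test-nginx-rc down to 0"
  fi
done
echo "old=$old new=$new"
```

The deprecation warning in the stderr above is why this flow later moved to `kubectl rollout` with Deployments.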
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:23:21.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 31 14:23:21.701: INFO: Waiting up to 5m0s for pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6" in namespace "var-expansion-1778" to be "success or failure"
Jan 31 14:23:21.710: INFO: Pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.244732ms
Jan 31 14:23:23.718: INFO: Pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016392237s
Jan 31 14:23:25.731: INFO: Pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029416933s
Jan 31 14:23:27.742: INFO: Pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040762812s
Jan 31 14:23:29.750: INFO: Pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048730533s
STEP: Saw pod success
Jan 31 14:23:29.750: INFO: Pod "var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6" satisfied condition "success or failure"
Jan 31 14:23:29.755: INFO: Trying to get logs from node iruya-node pod var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6 container dapi-container: 
STEP: delete the pod
Jan 31 14:23:29.857: INFO: Waiting for pod var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6 to disappear
Jan 31 14:23:29.868: INFO: Pod var-expansion-c2ff5f44-ae80-4c59-9eb7-441621a968e6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:23:29.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1778" for this suite.
Jan 31 14:23:35.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:23:36.131: INFO: namespace var-expansion-1778 deletion completed in 6.253317181s

• [SLOW TEST:14.522 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
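The var-expansion pod above succeeds once its container command prints an env-substituted string. In a pod spec, `$(VAR)` references in `command`/`args` are expanded by the kubelet from the container's environment before exec; no shell is needed in the image. A local emulation of that substitution (the variable name and value here are hypothetical, not from the test):

```shell
# Emulate kubelet-style $(VAR) expansion of a container command.
# In a pod spec this would look like: command: ["echo", "$(MESSAGE)"]
MESSAGE='test substitution'                # hypothetical env value
TEMPLATE='echo $(MESSAGE)'                 # command string before expansion
EXPANDED=$(printf '%s' "$TEMPLATE" | sed "s/\$(MESSAGE)/$MESSAGE/")
echo "$EXPANDED"
```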
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:23:36.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 31 14:23:36.309: INFO: Waiting up to 5m0s for pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470" in namespace "containers-3340" to be "success or failure"
Jan 31 14:23:36.401: INFO: Pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470": Phase="Pending", Reason="", readiness=false. Elapsed: 92.155008ms
Jan 31 14:23:38.411: INFO: Pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102367571s
Jan 31 14:23:40.418: INFO: Pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109176721s
Jan 31 14:23:42.425: INFO: Pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116366862s
Jan 31 14:23:44.437: INFO: Pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128191002s
STEP: Saw pod success
Jan 31 14:23:44.437: INFO: Pod "client-containers-3b214231-cf94-4512-b8da-f2150af1b470" satisfied condition "success or failure"
Jan 31 14:23:44.440: INFO: Trying to get logs from node iruya-node pod client-containers-3b214231-cf94-4512-b8da-f2150af1b470 container test-container: 
STEP: delete the pod
Jan 31 14:23:44.541: INFO: Waiting for pod client-containers-3b214231-cf94-4512-b8da-f2150af1b470 to disappear
Jan 31 14:23:44.552: INFO: Pod client-containers-3b214231-cf94-4512-b8da-f2150af1b470 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:23:44.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3340" for this suite.
Jan 31 14:23:50.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:23:50.811: INFO: namespace containers-3340 deletion completed in 6.247166476s

• [SLOW TEST:14.679 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
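The Docker Containers spec above passes `args` in the pod spec, which replaces the image's default CMD while leaving the ENTRYPOINT intact. A shell analogue of that precedence (all names and values below are hypothetical stand-ins):

```shell
# ENTRYPOINT is kept; args from the pod spec win over the image's default CMD.
entrypoint() { echo "entrypoint: $*"; }         # stand-in for the ENTRYPOINT
image_default_args="--image-default"            # baked into the image (hypothetical)
pod_spec_args="--from-pod-spec"                 # args: in the pod spec (hypothetical)
args="${pod_spec_args:-$image_default_args}"    # pod spec overrides when set
result=$(entrypoint $args)
echo "$result"
```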
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:23:50.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-32eb504a-4f0f-4d6f-94bb-4769186b6e5e
STEP: Creating secret with name s-test-opt-upd-bfeac39e-e4f0-4d09-a72c-8bbdaa0c12d5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-32eb504a-4f0f-4d6f-94bb-4769186b6e5e
STEP: Updating secret s-test-opt-upd-bfeac39e-e4f0-4d09-a72c-8bbdaa0c12d5
STEP: Creating secret with name s-test-opt-create-b1567c5f-91c9-47d8-91c2-e3d7d13548e1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:24:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2350" for this suite.
Jan 31 14:24:29.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:24:29.452: INFO: namespace projected-2350 deletion completed in 24.198671893s

• [SLOW TEST:38.640 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
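The projected-secret spec mounts Secrets as files and expects the mounted content to track create/update/delete of the underlying Secret objects. Secret `data` values travel base64-encoded in the API object, and the volume plugin writes the decoded bytes into the mounted file. A round-trip sketch (the value is hypothetical):

```shell
# Secret data is base64-encoded in the manifest/API object; the projected
# volume decodes it into the mounted file. Round-trip demonstration:
VALUE='value-1'
ENCODED=$(printf '%s' "$VALUE" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"
```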
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:24:29.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-7f1745b9-b628-46b0-ab88-bd86b0142202 in namespace container-probe-8491
Jan 31 14:24:39.570: INFO: Started pod liveness-7f1745b9-b628-46b0-ab88-bd86b0142202 in namespace container-probe-8491
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 14:24:39.575: INFO: Initial restart count of pod liveness-7f1745b9-b628-46b0-ab88-bd86b0142202 is 0
Jan 31 14:25:03.733: INFO: Restart count of pod container-probe-8491/liveness-7f1745b9-b628-46b0-ab88-bd86b0142202 is now 1 (24.158034245s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:25:03.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8491" for this suite.
Jan 31 14:25:09.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:25:10.018: INFO: namespace container-probe-8491 deletion completed in 6.226687207s

• [SLOW TEST:40.565 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
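The probe spec above watches restartCount climb from 0 to 1 roughly 24s after the pod starts, i.e. once the /healthz endpoint has failed enough consecutive checks. A sketch of the counting logic behind that restart (the threshold shown is the kubelet default of 3; the probe itself is stubbed):

```shell
# Restart the container once consecutive probe failures reach failureThreshold.
failure_threshold=3        # kubelet default; the test's actual value may differ
failures=0
restarts=0
probe() { return 1; }      # stand-in for an HTTP GET on /healthz that fails
for i in 1 2 3; do
  if ! probe; then
    failures=$((failures + 1))
  else
    failures=0             # any success resets the consecutive-failure count
  fi
  if [ "$failures" -ge "$failure_threshold" ]; then
    restarts=$((restarts + 1))
    failures=0
  fi
done
echo "restarts=$restarts"
```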
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:25:10.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 31 14:25:10.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8616'
Jan 31 14:25:10.424: INFO: stderr: ""
Jan 31 14:25:10.424: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 14:25:10.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8616'
Jan 31 14:25:10.601: INFO: stderr: ""
Jan 31 14:25:10.602: INFO: stdout: "update-demo-nautilus-snbv7 update-demo-nautilus-wgx72 "
Jan 31 14:25:10.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snbv7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8616'
Jan 31 14:25:10.751: INFO: stderr: ""
Jan 31 14:25:10.751: INFO: stdout: ""
Jan 31 14:25:10.751: INFO: update-demo-nautilus-snbv7 is created but not running
Jan 31 14:25:15.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8616'
Jan 31 14:25:17.425: INFO: stderr: ""
Jan 31 14:25:17.425: INFO: stdout: "update-demo-nautilus-snbv7 update-demo-nautilus-wgx72 "
Jan 31 14:25:17.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snbv7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8616'
Jan 31 14:25:17.731: INFO: stderr: ""
Jan 31 14:25:17.731: INFO: stdout: ""
Jan 31 14:25:17.731: INFO: update-demo-nautilus-snbv7 is created but not running
Jan 31 14:25:22.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8616'
Jan 31 14:25:22.977: INFO: stderr: ""
Jan 31 14:25:22.977: INFO: stdout: "update-demo-nautilus-snbv7 update-demo-nautilus-wgx72 "
Jan 31 14:25:22.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snbv7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8616'
Jan 31 14:25:23.099: INFO: stderr: ""
Jan 31 14:25:23.099: INFO: stdout: "true"
Jan 31 14:25:23.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snbv7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8616'
Jan 31 14:25:23.197: INFO: stderr: ""
Jan 31 14:25:23.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 14:25:23.197: INFO: validating pod update-demo-nautilus-snbv7
Jan 31 14:25:23.211: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 14:25:23.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 14:25:23.211: INFO: update-demo-nautilus-snbv7 is verified up and running
Jan 31 14:25:23.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx72 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8616'
Jan 31 14:25:23.309: INFO: stderr: ""
Jan 31 14:25:23.309: INFO: stdout: "true"
Jan 31 14:25:23.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgx72 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8616'
Jan 31 14:25:23.414: INFO: stderr: ""
Jan 31 14:25:23.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 14:25:23.414: INFO: validating pod update-demo-nautilus-wgx72
Jan 31 14:25:23.420: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 14:25:23.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 14:25:23.421: INFO: update-demo-nautilus-wgx72 is verified up and running
STEP: using delete to clean up resources
Jan 31 14:25:23.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8616'
Jan 31 14:25:23.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 14:25:23.520: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 31 14:25:23.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8616'
Jan 31 14:25:23.738: INFO: stderr: "No resources found.\n"
Jan 31 14:25:23.738: INFO: stdout: ""
Jan 31 14:25:23.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8616 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:25:23.851: INFO: stderr: ""
Jan 31 14:25:23.851: INFO: stdout: "update-demo-nautilus-snbv7\nupdate-demo-nautilus-wgx72\n"
Jan 31 14:25:24.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8616'
Jan 31 14:25:24.579: INFO: stderr: "No resources found.\n"
Jan 31 14:25:24.580: INFO: stdout: ""
Jan 31 14:25:24.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8616 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:25:24.699: INFO: stderr: ""
Jan 31 14:25:24.699: INFO: stdout: "update-demo-nautilus-snbv7\nupdate-demo-nautilus-wgx72\n"
Jan 31 14:25:24.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8616'
Jan 31 14:25:25.108: INFO: stderr: "No resources found.\n"
Jan 31 14:25:25.109: INFO: stdout: ""
Jan 31 14:25:25.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8616 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:25:25.248: INFO: stderr: ""
Jan 31 14:25:25.248: INFO: stdout: "update-demo-nautilus-snbv7\nupdate-demo-nautilus-wgx72\n"
Jan 31 14:25:25.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8616'
Jan 31 14:25:25.707: INFO: stderr: "No resources found.\n"
Jan 31 14:25:25.707: INFO: stdout: ""
Jan 31 14:25:25.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8616 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 31 14:25:25.871: INFO: stderr: ""
Jan 31 14:25:25.871: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:25:25.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8616" for this suite.
Jan 31 14:25:48.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:25:48.927: INFO: namespace kubectl-8616 deletion completed in 23.043695172s

• [SLOW TEST:38.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
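After the force delete above, the test keeps re-querying pod names (via a go-template that filters out pods that already carry a deletionTimestamp) until the listing comes back empty. The retry shape, with the kubectl query stubbed out for illustration:

```shell
# Poll until the (stubbed) pod listing is empty; in the log the listing is
# a kubectl go-template over pods without a deletionTimestamp.
attempt=0
pods="update-demo-nautilus-snbv7 update-demo-nautilus-wgx72"
while [ -n "$pods" ]; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge 3 ]; then   # stub: pods finish terminating on poll 3
    pods=""
  fi
done
echo "attempts=$attempt"
```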
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:25:48.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4663
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4663
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4663
Jan 31 14:25:49.404: INFO: Found 0 stateful pods, waiting for 1
Jan 31 14:25:59.413: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 31 14:25:59.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:26:00.047: INFO: stderr: "I0131 14:25:59.634079    2141 log.go:172] (0xc0008d00b0) (0xc0006f45a0) Create stream\nI0131 14:25:59.634265    2141 log.go:172] (0xc0008d00b0) (0xc0006f45a0) Stream added, broadcasting: 1\nI0131 14:25:59.644051    2141 log.go:172] (0xc0008d00b0) Reply frame received for 1\nI0131 14:25:59.644082    2141 log.go:172] (0xc0008d00b0) (0xc00076e140) Create stream\nI0131 14:25:59.644089    2141 log.go:172] (0xc0008d00b0) (0xc00076e140) Stream added, broadcasting: 3\nI0131 14:25:59.646390    2141 log.go:172] (0xc0008d00b0) Reply frame received for 3\nI0131 14:25:59.646504    2141 log.go:172] (0xc0008d00b0) (0xc000812000) Create stream\nI0131 14:25:59.646523    2141 log.go:172] (0xc0008d00b0) (0xc000812000) Stream added, broadcasting: 5\nI0131 14:25:59.649831    2141 log.go:172] (0xc0008d00b0) Reply frame received for 5\nI0131 14:25:59.777627    2141 log.go:172] (0xc0008d00b0) Data frame received for 5\nI0131 14:25:59.777683    2141 log.go:172] (0xc000812000) (5) Data frame handling\nI0131 14:25:59.777696    2141 log.go:172] (0xc000812000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:25:59.808993    2141 log.go:172] (0xc0008d00b0) Data frame received for 3\nI0131 14:25:59.809053    2141 log.go:172] (0xc00076e140) (3) Data frame handling\nI0131 14:25:59.809074    2141 log.go:172] (0xc00076e140) (3) Data frame sent\nI0131 14:26:00.031098    2141 log.go:172] (0xc0008d00b0) (0xc00076e140) Stream removed, broadcasting: 3\nI0131 14:26:00.031306    2141 log.go:172] (0xc0008d00b0) Data frame received for 1\nI0131 14:26:00.031340    2141 log.go:172] (0xc0006f45a0) (1) Data frame handling\nI0131 14:26:00.031368    2141 log.go:172] (0xc0006f45a0) (1) Data frame sent\nI0131 14:26:00.031720    2141 log.go:172] (0xc0008d00b0) (0xc0006f45a0) Stream removed, broadcasting: 1\nI0131 14:26:00.031786    2141 log.go:172] (0xc0008d00b0) (0xc000812000) Stream removed, broadcasting: 5\nI0131 14:26:00.031808    2141 log.go:172] (0xc0008d00b0) Go away received\nI0131 14:26:00.033224    2141 log.go:172] (0xc0008d00b0) (0xc0006f45a0) Stream removed, broadcasting: 1\nI0131 14:26:00.033247    2141 log.go:172] (0xc0008d00b0) (0xc00076e140) Stream removed, broadcasting: 3\nI0131 14:26:00.033259    2141 log.go:172] (0xc0008d00b0) (0xc000812000) Stream removed, broadcasting: 5\n"
Jan 31 14:26:00.047: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:26:00.047: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:26:00.054: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:26:00.054: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:26:00.117: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999651s
Jan 31 14:26:01.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.94922936s
Jan 31 14:26:02.142: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.937040193s
Jan 31 14:26:03.149: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.924690677s
Jan 31 14:26:04.158: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.917627512s
Jan 31 14:26:05.169: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.908357147s
Jan 31 14:26:06.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.897320137s
Jan 31 14:26:07.189: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.88739251s
Jan 31 14:26:08.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.877549171s
Jan 31 14:26:09.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 865.906805ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4663
Jan 31 14:26:10.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:26:10.838: INFO: stderr: "I0131 14:26:10.470670    2158 log.go:172] (0xc000130dc0) (0xc0001f2820) Create stream\nI0131 14:26:10.471058    2158 log.go:172] (0xc000130dc0) (0xc0001f2820) Stream added, broadcasting: 1\nI0131 14:26:10.480192    2158 log.go:172] (0xc000130dc0) Reply frame received for 1\nI0131 14:26:10.480292    2158 log.go:172] (0xc000130dc0) (0xc0001f28c0) Create stream\nI0131 14:26:10.480304    2158 log.go:172] (0xc000130dc0) (0xc0001f28c0) Stream added, broadcasting: 3\nI0131 14:26:10.481845    2158 log.go:172] (0xc000130dc0) Reply frame received for 3\nI0131 14:26:10.481892    2158 log.go:172] (0xc000130dc0) (0xc0009b8000) Create stream\nI0131 14:26:10.481906    2158 log.go:172] (0xc000130dc0) (0xc0009b8000) Stream added, broadcasting: 5\nI0131 14:26:10.483446    2158 log.go:172] (0xc000130dc0) Reply frame received for 5\nI0131 14:26:10.643497    2158 log.go:172] (0xc000130dc0) Data frame received for 3\nI0131 14:26:10.643646    2158 log.go:172] (0xc0001f28c0) (3) Data frame handling\nI0131 14:26:10.643660    2158 log.go:172] (0xc0001f28c0) (3) Data frame sent\nI0131 14:26:10.643739    2158 log.go:172] (0xc000130dc0) Data frame received for 5\nI0131 14:26:10.643755    2158 log.go:172] (0xc0009b8000) (5) Data frame handling\nI0131 14:26:10.643773    2158 log.go:172] (0xc0009b8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0131 14:26:10.811306    2158 log.go:172] (0xc000130dc0) Data frame received for 1\nI0131 14:26:10.811477    2158 log.go:172] (0xc000130dc0) (0xc0001f28c0) Stream removed, broadcasting: 3\nI0131 14:26:10.811618    2158 log.go:172] (0xc000130dc0) (0xc0009b8000) Stream removed, broadcasting: 5\nI0131 14:26:10.811803    2158 log.go:172] (0xc0001f2820) (1) Data frame handling\nI0131 14:26:10.811892    2158 log.go:172] (0xc0001f2820) (1) Data frame sent\nI0131 14:26:10.811920    2158 log.go:172] (0xc000130dc0) (0xc0001f2820) Stream removed, broadcasting: 1\nI0131 14:26:10.811947    2158 log.go:172] (0xc000130dc0) Go away received\nI0131 14:26:10.816149    2158 log.go:172] (0xc000130dc0) (0xc0001f2820) Stream removed, broadcasting: 1\nI0131 14:26:10.816383    2158 log.go:172] (0xc000130dc0) (0xc0001f28c0) Stream removed, broadcasting: 3\nI0131 14:26:10.816439    2158 log.go:172] (0xc000130dc0) (0xc0009b8000) Stream removed, broadcasting: 5\n"
Jan 31 14:26:10.838: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:26:10.838: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:26:10.850: INFO: Found 1 stateful pods, waiting for 3
Jan 31 14:26:20.877: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:26:20.878: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:26:20.878: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 31 14:26:30.872: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:26:30.872: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:26:30.872: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
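Each exec invocation that follows wraps the in-pod command as `/bin/sh -x -c 'mv ... || true'`. The `|| true` makes the shell exit 0 even when the `mv` itself fails, so `kubectl exec` only reports a nonzero rc for transport-level failures (container killed, pod gone), not for a missing file. A quick local check of that behavior, no cluster needed (`/no/such/file` is a placeholder path):

```shell
# `|| true` swallows the mv failure, so the shell exits 0.
sh -x -c 'mv -v /no/such/file /tmp/ || true'
rc=$?
echo "exit code: $rc"   # prints: exit code: 0
```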
Jan 31 14:26:30.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:26:31.465: INFO: stderr: "I0131 14:26:31.133644    2179 log.go:172] (0xc0009944d0) (0xc000a368c0) Create stream\nI0131 14:26:31.134014    2179 log.go:172] (0xc0009944d0) (0xc000a368c0) Stream added, broadcasting: 1\nI0131 14:26:31.155584    2179 log.go:172] (0xc0009944d0) Reply frame received for 1\nI0131 14:26:31.155657    2179 log.go:172] (0xc0009944d0) (0xc000a36000) Create stream\nI0131 14:26:31.155672    2179 log.go:172] (0xc0009944d0) (0xc000a36000) Stream added, broadcasting: 3\nI0131 14:26:31.160425    2179 log.go:172] (0xc0009944d0) Reply frame received for 3\nI0131 14:26:31.160532    2179 log.go:172] (0xc0009944d0) (0xc000a360a0) Create stream\nI0131 14:26:31.160544    2179 log.go:172] (0xc0009944d0) (0xc000a360a0) Stream added, broadcasting: 5\nI0131 14:26:31.162850    2179 log.go:172] (0xc0009944d0) Reply frame received for 5\nI0131 14:26:31.274743    2179 log.go:172] (0xc0009944d0) Data frame received for 3\nI0131 14:26:31.274929    2179 log.go:172] (0xc000a36000) (3) Data frame handling\nI0131 14:26:31.274966    2179 log.go:172] (0xc000a36000) (3) Data frame sent\nI0131 14:26:31.275042    2179 log.go:172] (0xc0009944d0) Data frame received for 5\nI0131 14:26:31.275064    2179 log.go:172] (0xc000a360a0) (5) Data frame handling\nI0131 14:26:31.275092    2179 log.go:172] (0xc000a360a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:26:31.449767    2179 log.go:172] (0xc0009944d0) Data frame received for 1\nI0131 14:26:31.449968    2179 log.go:172] (0xc0009944d0) (0xc000a36000) Stream removed, broadcasting: 3\nI0131 14:26:31.450076    2179 log.go:172] (0xc000a368c0) (1) Data frame handling\nI0131 14:26:31.450119    2179 log.go:172] (0xc000a368c0) (1) Data frame sent\nI0131 14:26:31.450223    2179 log.go:172] (0xc0009944d0) (0xc000a360a0) Stream removed, broadcasting: 5\nI0131 14:26:31.450298    2179 log.go:172] (0xc0009944d0) (0xc000a368c0) Stream removed, broadcasting: 1\nI0131 14:26:31.450321    2179 log.go:172] 
(0xc0009944d0) Go away received\nI0131 14:26:31.451767    2179 log.go:172] (0xc0009944d0) (0xc000a368c0) Stream removed, broadcasting: 1\nI0131 14:26:31.451794    2179 log.go:172] (0xc0009944d0) (0xc000a36000) Stream removed, broadcasting: 3\nI0131 14:26:31.451806    2179 log.go:172] (0xc0009944d0) (0xc000a360a0) Stream removed, broadcasting: 5\n"
Jan 31 14:26:31.466: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:26:31.466: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:26:31.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:26:31.824: INFO: stderr: "I0131 14:26:31.598488    2201 log.go:172] (0xc0009c8370) (0xc0006588c0) Create stream\nI0131 14:26:31.598721    2201 log.go:172] (0xc0009c8370) (0xc0006588c0) Stream added, broadcasting: 1\nI0131 14:26:31.601089    2201 log.go:172] (0xc0009c8370) Reply frame received for 1\nI0131 14:26:31.601121    2201 log.go:172] (0xc0009c8370) (0xc000886000) Create stream\nI0131 14:26:31.601129    2201 log.go:172] (0xc0009c8370) (0xc000886000) Stream added, broadcasting: 3\nI0131 14:26:31.602266    2201 log.go:172] (0xc0009c8370) Reply frame received for 3\nI0131 14:26:31.602284    2201 log.go:172] (0xc0009c8370) (0xc00090c000) Create stream\nI0131 14:26:31.602293    2201 log.go:172] (0xc0009c8370) (0xc00090c000) Stream added, broadcasting: 5\nI0131 14:26:31.603318    2201 log.go:172] (0xc0009c8370) Reply frame received for 5\nI0131 14:26:31.674178    2201 log.go:172] (0xc0009c8370) Data frame received for 5\nI0131 14:26:31.674228    2201 log.go:172] (0xc00090c000) (5) Data frame handling\nI0131 14:26:31.674250    2201 log.go:172] (0xc00090c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:26:31.718811    2201 log.go:172] (0xc0009c8370) Data frame received for 3\nI0131 14:26:31.718852    2201 log.go:172] (0xc000886000) (3) Data frame handling\nI0131 14:26:31.718883    2201 log.go:172] (0xc000886000) (3) Data frame sent\nI0131 14:26:31.808136    2201 log.go:172] (0xc0009c8370) Data frame received for 1\nI0131 14:26:31.808732    2201 log.go:172] (0xc0006588c0) (1) Data frame handling\nI0131 14:26:31.808822    2201 log.go:172] (0xc0006588c0) (1) Data frame sent\nI0131 14:26:31.809540    2201 log.go:172] (0xc0009c8370) (0xc00090c000) Stream removed, broadcasting: 5\nI0131 14:26:31.809958    2201 log.go:172] (0xc0009c8370) (0xc000886000) Stream removed, broadcasting: 3\nI0131 14:26:31.810170    2201 log.go:172] (0xc0009c8370) (0xc0006588c0) Stream removed, broadcasting: 1\nI0131 14:26:31.810246    2201 log.go:172] 
(0xc0009c8370) Go away received\nI0131 14:26:31.811389    2201 log.go:172] (0xc0009c8370) (0xc0006588c0) Stream removed, broadcasting: 1\nI0131 14:26:31.811427    2201 log.go:172] (0xc0009c8370) (0xc000886000) Stream removed, broadcasting: 3\nI0131 14:26:31.811441    2201 log.go:172] (0xc0009c8370) (0xc00090c000) Stream removed, broadcasting: 5\n"
Jan 31 14:26:31.824: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:26:31.825: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:26:31.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:26:32.430: INFO: stderr: "I0131 14:26:32.091897    2222 log.go:172] (0xc00084c0b0) (0xc0009b0140) Create stream\nI0131 14:26:32.092086    2222 log.go:172] (0xc00084c0b0) (0xc0009b0140) Stream added, broadcasting: 1\nI0131 14:26:32.103535    2222 log.go:172] (0xc00084c0b0) Reply frame received for 1\nI0131 14:26:32.103644    2222 log.go:172] (0xc00084c0b0) (0xc000a44000) Create stream\nI0131 14:26:32.103675    2222 log.go:172] (0xc00084c0b0) (0xc000a44000) Stream added, broadcasting: 3\nI0131 14:26:32.107316    2222 log.go:172] (0xc00084c0b0) Reply frame received for 3\nI0131 14:26:32.107502    2222 log.go:172] (0xc00084c0b0) (0xc000a445a0) Create stream\nI0131 14:26:32.107529    2222 log.go:172] (0xc00084c0b0) (0xc000a445a0) Stream added, broadcasting: 5\nI0131 14:26:32.112049    2222 log.go:172] (0xc00084c0b0) Reply frame received for 5\nI0131 14:26:32.277538    2222 log.go:172] (0xc00084c0b0) Data frame received for 5\nI0131 14:26:32.277756    2222 log.go:172] (0xc000a445a0) (5) Data frame handling\nI0131 14:26:32.277810    2222 log.go:172] (0xc000a445a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:26:32.315759    2222 log.go:172] (0xc00084c0b0) Data frame received for 3\nI0131 14:26:32.315905    2222 log.go:172] (0xc000a44000) (3) Data frame handling\nI0131 14:26:32.315937    2222 log.go:172] (0xc000a44000) (3) Data frame sent\nI0131 14:26:32.416101    2222 log.go:172] (0xc00084c0b0) Data frame received for 1\nI0131 14:26:32.416257    2222 log.go:172] (0xc0009b0140) (1) Data frame handling\nI0131 14:26:32.416315    2222 log.go:172] (0xc0009b0140) (1) Data frame sent\nI0131 14:26:32.417197    2222 log.go:172] (0xc00084c0b0) (0xc0009b0140) Stream removed, broadcasting: 1\nI0131 14:26:32.417676    2222 log.go:172] (0xc00084c0b0) (0xc000a44000) Stream removed, broadcasting: 3\nI0131 14:26:32.418250    2222 log.go:172] (0xc00084c0b0) (0xc000a445a0) Stream removed, broadcasting: 5\nI0131 14:26:32.418380    2222 log.go:172] 
(0xc00084c0b0) Go away received\nI0131 14:26:32.418756    2222 log.go:172] (0xc00084c0b0) (0xc0009b0140) Stream removed, broadcasting: 1\nI0131 14:26:32.418804    2222 log.go:172] (0xc00084c0b0) (0xc000a44000) Stream removed, broadcasting: 3\nI0131 14:26:32.418826    2222 log.go:172] (0xc00084c0b0) (0xc000a445a0) Stream removed, broadcasting: 5\n"
Jan 31 14:26:32.430: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:26:32.430: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:26:32.430: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:26:32.438: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 31 14:26:42.455: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:26:42.455: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:26:42.455: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:26:42.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999682s
Jan 31 14:26:43.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983048623s
Jan 31 14:26:44.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971880717s
Jan 31 14:26:45.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.955086933s
Jan 31 14:26:46.549: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94343088s
Jan 31 14:26:47.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.921647774s
Jan 31 14:26:48.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.907571759s
Jan 31 14:26:49.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.87229815s
Jan 31 14:26:50.631: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.857386265s
Jan 31 14:26:51.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 839.044858ms
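The countdown above is a negative check: the framework re-reads the StatefulSet's replica count about once per second and asserts it never exceeds 3 before a 10s window expires. A hedged shell sketch of that poll-until-deadline pattern (`get_replicas` is a hypothetical stand-in for reading `status.replicas`; the window is shortened to 3s):

```shell
# "Verify it does NOT change for N seconds" poll, as the framework does above.
get_replicas() { echo 3; }            # stand-in for querying the StatefulSet
deadline=$(( $(date +%s) + 3 ))       # shortened window for the sketch
while [ "$(date +%s)" -lt "$deadline" ]; do
    current=$(get_replicas)
    if [ "$current" -gt 3 ]; then
        echo "statefulset scaled past 3 unexpectedly" >&2
        exit 1
    fi
    sleep 1
done
echo "replica count held at $current for the full window"
```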
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4663
Jan 31 14:26:52.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:26:53.297: INFO: stderr: "I0131 14:26:52.974696    2242 log.go:172] (0xc0008f20b0) (0xc00090e5a0) Create stream\nI0131 14:26:52.975004    2242 log.go:172] (0xc0008f20b0) (0xc00090e5a0) Stream added, broadcasting: 1\nI0131 14:26:52.987246    2242 log.go:172] (0xc0008f20b0) Reply frame received for 1\nI0131 14:26:52.987370    2242 log.go:172] (0xc0008f20b0) (0xc00051e280) Create stream\nI0131 14:26:52.987380    2242 log.go:172] (0xc0008f20b0) (0xc00051e280) Stream added, broadcasting: 3\nI0131 14:26:52.990255    2242 log.go:172] (0xc0008f20b0) Reply frame received for 3\nI0131 14:26:52.990274    2242 log.go:172] (0xc0008f20b0) (0xc00051e320) Create stream\nI0131 14:26:52.990281    2242 log.go:172] (0xc0008f20b0) (0xc00051e320) Stream added, broadcasting: 5\nI0131 14:26:52.993434    2242 log.go:172] (0xc0008f20b0) Reply frame received for 5\nI0131 14:26:53.120423    2242 log.go:172] (0xc0008f20b0) Data frame received for 5\nI0131 14:26:53.120509    2242 log.go:172] (0xc00051e320) (5) Data frame handling\nI0131 14:26:53.120528    2242 log.go:172] (0xc00051e320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0131 14:26:53.120546    2242 log.go:172] (0xc0008f20b0) Data frame received for 3\nI0131 14:26:53.120553    2242 log.go:172] (0xc00051e280) (3) Data frame handling\nI0131 14:26:53.120567    2242 log.go:172] (0xc00051e280) (3) Data frame sent\nI0131 14:26:53.286147    2242 log.go:172] (0xc0008f20b0) (0xc00051e280) Stream removed, broadcasting: 3\nI0131 14:26:53.286369    2242 log.go:172] (0xc0008f20b0) Data frame received for 1\nI0131 14:26:53.286382    2242 log.go:172] (0xc00090e5a0) (1) Data frame handling\nI0131 14:26:53.286395    2242 log.go:172] (0xc00090e5a0) (1) Data frame sent\nI0131 14:26:53.286402    2242 log.go:172] (0xc0008f20b0) (0xc00090e5a0) Stream removed, broadcasting: 1\nI0131 14:26:53.286932    2242 log.go:172] (0xc0008f20b0) (0xc00051e320) Stream removed, broadcasting: 5\nI0131 14:26:53.286974    2242 log.go:172] 
(0xc0008f20b0) (0xc00090e5a0) Stream removed, broadcasting: 1\nI0131 14:26:53.286983    2242 log.go:172] (0xc0008f20b0) (0xc00051e280) Stream removed, broadcasting: 3\nI0131 14:26:53.286990    2242 log.go:172] (0xc0008f20b0) (0xc00051e320) Stream removed, broadcasting: 5\n"
Jan 31 14:26:53.298: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:26:53.298: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:26:53.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:26:53.657: INFO: stderr: "I0131 14:26:53.480791    2257 log.go:172] (0xc000116dc0) (0xc000a5a6e0) Create stream\nI0131 14:26:53.480948    2257 log.go:172] (0xc000116dc0) (0xc000a5a6e0) Stream added, broadcasting: 1\nI0131 14:26:53.483446    2257 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0131 14:26:53.483480    2257 log.go:172] (0xc000116dc0) (0xc00061a140) Create stream\nI0131 14:26:53.483490    2257 log.go:172] (0xc000116dc0) (0xc00061a140) Stream added, broadcasting: 3\nI0131 14:26:53.484272    2257 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0131 14:26:53.484300    2257 log.go:172] (0xc000116dc0) (0xc000854000) Create stream\nI0131 14:26:53.484308    2257 log.go:172] (0xc000116dc0) (0xc000854000) Stream added, broadcasting: 5\nI0131 14:26:53.485321    2257 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0131 14:26:53.555033    2257 log.go:172] (0xc000116dc0) Data frame received for 5\nI0131 14:26:53.555066    2257 log.go:172] (0xc000854000) (5) Data frame handling\nI0131 14:26:53.555083    2257 log.go:172] (0xc000854000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/I0131 14:26:53.555228    2257 log.go:172] (0xc000116dc0) Data frame received for 5\nI0131 14:26:53.555249    2257 log.go:172] (0xc000854000) (5) Data frame handling\nI0131 14:26:53.555262    2257 log.go:172] (0xc000854000) (5) Data frame sent\n\nI0131 14:26:53.555290    2257 log.go:172] (0xc000116dc0) Data frame received for 3\nI0131 14:26:53.555312    2257 log.go:172] (0xc00061a140) (3) Data frame handling\nI0131 14:26:53.555335    2257 log.go:172] (0xc00061a140) (3) Data frame sent\nI0131 14:26:53.640987    2257 log.go:172] (0xc000116dc0) (0xc00061a140) Stream removed, broadcasting: 3\nI0131 14:26:53.641348    2257 log.go:172] (0xc000116dc0) Data frame received for 1\nI0131 14:26:53.641388    2257 log.go:172] (0xc000a5a6e0) (1) Data frame handling\nI0131 14:26:53.641457    2257 log.go:172] (0xc000a5a6e0) (1) Data frame sent\nI0131 
14:26:53.641512    2257 log.go:172] (0xc000116dc0) (0xc000a5a6e0) Stream removed, broadcasting: 1\nI0131 14:26:53.641684    2257 log.go:172] (0xc000116dc0) (0xc000854000) Stream removed, broadcasting: 5\nI0131 14:26:53.642110    2257 log.go:172] (0xc000116dc0) Go away received\nI0131 14:26:53.643186    2257 log.go:172] (0xc000116dc0) (0xc000a5a6e0) Stream removed, broadcasting: 1\nI0131 14:26:53.643221    2257 log.go:172] (0xc000116dc0) (0xc00061a140) Stream removed, broadcasting: 3\nI0131 14:26:53.643238    2257 log.go:172] (0xc000116dc0) (0xc000854000) Stream removed, broadcasting: 5\n"
Jan 31 14:26:53.657: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:26:53.657: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:26:53.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:26:54.634: INFO: rc: 137
Jan 31 14:26:54.635: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
 I0131 14:26:53.949368    2277 log.go:172] (0xc0009260b0) (0xc000804780) Create stream
I0131 14:26:53.949749    2277 log.go:172] (0xc0009260b0) (0xc000804780) Stream added, broadcasting: 1
I0131 14:26:53.961585    2277 log.go:172] (0xc0009260b0) Reply frame received for 1
I0131 14:26:53.961837    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Create stream
I0131 14:26:53.961875    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Stream added, broadcasting: 3
I0131 14:26:53.965735    2277 log.go:172] (0xc0009260b0) Reply frame received for 3
I0131 14:26:53.965858    2277 log.go:172] (0xc0009260b0) (0xc000300000) Create stream
I0131 14:26:53.965888    2277 log.go:172] (0xc0009260b0) (0xc000300000) Stream added, broadcasting: 5
I0131 14:26:53.974722    2277 log.go:172] (0xc0009260b0) Reply frame received for 5
I0131 14:26:54.384250    2277 log.go:172] (0xc0009260b0) Data frame received for 3
I0131 14:26:54.384411    2277 log.go:172] (0xc0003b2280) (3) Data frame handling
I0131 14:26:54.384440    2277 log.go:172] (0xc0003b2280) (3) Data frame sent
I0131 14:26:54.384513    2277 log.go:172] (0xc0009260b0) Data frame received for 5
I0131 14:26:54.384537    2277 log.go:172] (0xc000300000) (5) Data frame handling
I0131 14:26:54.384547    2277 log.go:172] (0xc000300000) (5) Data frame sent
+ mv -v /tmp/index.html /usr/share/nginx/html/
I0131 14:26:54.618254    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Stream removed, broadcasting: 3
I0131 14:26:54.618422    2277 log.go:172] (0xc0009260b0) Data frame received for 1
I0131 14:26:54.618455    2277 log.go:172] (0xc000804780) (1) Data frame handling
I0131 14:26:54.618472    2277 log.go:172] (0xc000804780) (1) Data frame sent
I0131 14:26:54.618484    2277 log.go:172] (0xc0009260b0) (0xc000804780) Stream removed, broadcasting: 1
I0131 14:26:54.618597    2277 log.go:172] (0xc0009260b0) (0xc000300000) Stream removed, broadcasting: 5
I0131 14:26:54.618690    2277 log.go:172] (0xc0009260b0) Go away received
I0131 14:26:54.619686    2277 log.go:172] (0xc0009260b0) (0xc000804780) Stream removed, broadcasting: 1
I0131 14:26:54.619705    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Stream removed, broadcasting: 3
I0131 14:26:54.619721    2277 log.go:172] (0xc0009260b0) (0xc000300000) Stream removed, broadcasting: 5
command terminated with exit code 137
 []  0xc002752240 exit status 137   true [0xc0014b8350 0xc0014b8368 0xc0014b8380] [0xc0014b8350 0xc0014b8368 0xc0014b8380] [0xc0014b8360 0xc0014b8378] [0xba6c50 0xba6c50] 0xc002ac3320 }:
Command stdout:
'/tmp/index.html' -> '/usr/share/nginx/html/index.html'

stderr:
I0131 14:26:53.949368    2277 log.go:172] (0xc0009260b0) (0xc000804780) Create stream
I0131 14:26:53.949749    2277 log.go:172] (0xc0009260b0) (0xc000804780) Stream added, broadcasting: 1
I0131 14:26:53.961585    2277 log.go:172] (0xc0009260b0) Reply frame received for 1
I0131 14:26:53.961837    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Create stream
I0131 14:26:53.961875    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Stream added, broadcasting: 3
I0131 14:26:53.965735    2277 log.go:172] (0xc0009260b0) Reply frame received for 3
I0131 14:26:53.965858    2277 log.go:172] (0xc0009260b0) (0xc000300000) Create stream
I0131 14:26:53.965888    2277 log.go:172] (0xc0009260b0) (0xc000300000) Stream added, broadcasting: 5
I0131 14:26:53.974722    2277 log.go:172] (0xc0009260b0) Reply frame received for 5
I0131 14:26:54.384250    2277 log.go:172] (0xc0009260b0) Data frame received for 3
I0131 14:26:54.384411    2277 log.go:172] (0xc0003b2280) (3) Data frame handling
I0131 14:26:54.384440    2277 log.go:172] (0xc0003b2280) (3) Data frame sent
I0131 14:26:54.384513    2277 log.go:172] (0xc0009260b0) Data frame received for 5
I0131 14:26:54.384537    2277 log.go:172] (0xc000300000) (5) Data frame handling
I0131 14:26:54.384547    2277 log.go:172] (0xc000300000) (5) Data frame sent
+ mv -v /tmp/index.html /usr/share/nginx/html/
I0131 14:26:54.618254    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Stream removed, broadcasting: 3
I0131 14:26:54.618422    2277 log.go:172] (0xc0009260b0) Data frame received for 1
I0131 14:26:54.618455    2277 log.go:172] (0xc000804780) (1) Data frame handling
I0131 14:26:54.618472    2277 log.go:172] (0xc000804780) (1) Data frame sent
I0131 14:26:54.618484    2277 log.go:172] (0xc0009260b0) (0xc000804780) Stream removed, broadcasting: 1
I0131 14:26:54.618597    2277 log.go:172] (0xc0009260b0) (0xc000300000) Stream removed, broadcasting: 5
I0131 14:26:54.618690    2277 log.go:172] (0xc0009260b0) Go away received
I0131 14:26:54.619686    2277 log.go:172] (0xc0009260b0) (0xc000804780) Stream removed, broadcasting: 1
I0131 14:26:54.619705    2277 log.go:172] (0xc0009260b0) (0xc0003b2280) Stream removed, broadcasting: 3
I0131 14:26:54.619721    2277 log.go:172] (0xc0009260b0) (0xc000300000) Stream removed, broadcasting: 5
command terminated with exit code 137

error:
exit status 137
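Exit code 137 means the shell inside the container died from SIGKILL (128 + 9), consistent with ss-2 being torn down mid-exec during the scale-down; here even `|| true` cannot help, because the shell that would evaluate it is the process being killed. The 128+signal convention is easy to confirm locally:

```shell
# POSIX shells report death-by-signal as 128 + signal number.
sh -c 'kill -9 $$'      # the child shell SIGKILLs itself (signal 9)
rc=$?
echo "exit code: $rc"   # prints: exit code: 137
```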
Jan 31 14:27:04.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:27:04.919: INFO: rc: 1
Jan 31 14:27:04.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002f060c0 exit status 1   true [0xc000cda290 0xc000cda4d0 0xc000cda8e8] [0xc000cda290 0xc000cda4d0 0xc000cda8e8] [0xc000cda468 0xc000cda800] [0xba6c50 0xba6c50] 0xc002304f00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Jan 31 14:27:14.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:27:15.120: INFO: rc: 1
Jan 31 14:27:15.121: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a2090 exit status 1   true [0xc002106000 0xc002106018 0xc002106030] [0xc002106000 0xc002106018 0xc002106030] [0xc002106010 0xc002106028] [0xba6c50 0xba6c50] 0xc002ec8720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:27:25.121 through 14:29:29.618: INFO: 13 further retries of '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' at ~10s intervals; every attempt returned rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-2" not found
Jan 31 14:29:39.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:29:39.819: INFO: rc: 1
Jan 31 14:29:39.820: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002f06120 exit status 1   true [0xc000cda290 0xc000cda4d0 0xc000cda8e8] [0xc000cda290 0xc000cda4d0 0xc000cda8e8] [0xc000cda468 0xc000cda800] [0xba6c50 0xba6c50] 0xc002560300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:29:49.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:29:50.017: INFO: rc: 1
Jan 31 14:29:50.017: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34210 exit status 1   true [0xc002106038 0xc002106050 0xc002106068] [0xc002106038 0xc002106050 0xc002106068] [0xc002106048 0xc002106060] [0xba6c50 0xba6c50] 0xc002305ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:30:00.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:30:00.202: INFO: rc: 1
Jan 31 14:30:00.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a22d0 exit status 1   true [0xc0029280e8 0xc002928110 0xc002928130] [0xc0029280e8 0xc002928110 0xc002928130] [0xc002928108 0xc002928120] [0xba6c50 0xba6c50] 0xc002ec9620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:30:10.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:30:10.386: INFO: rc: 1
Jan 31 14:30:10.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a25a0 exit status 1   true [0xc002928148 0xc002928188 0xc0029281b8] [0xc002928148 0xc002928188 0xc0029281b8] [0xc002928168 0xc0029281a8] [0xba6c50 0xba6c50] 0xc002ec9f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:30:20.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:30:20.590: INFO: rc: 1
Jan 31 14:30:20.590: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a2690 exit status 1   true [0xc0029281d0 0xc002928200 0xc002928248] [0xc0029281d0 0xc002928200 0xc002928248] [0xc0029281e0 0xc002928240] [0xba6c50 0xba6c50] 0xc002a865a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:30:30.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:30:30.751: INFO: rc: 1
Jan 31 14:30:30.752: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a2750 exit status 1   true [0xc002928250 0xc002928268 0xc002928280] [0xc002928250 0xc002928268 0xc002928280] [0xc002928260 0xc002928278] [0xba6c50 0xba6c50] 0xc002a868a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:30:40.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:30:40.928: INFO: rc: 1
Jan 31 14:30:40.928: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002f06240 exit status 1   true [0xc000cda9a8 0xc000cdae30 0xc000cdb118] [0xc000cda9a8 0xc000cdae30 0xc000cdb118] [0xc000cdaa10 0xc000cdb0e8] [0xba6c50 0xba6c50] 0xc002560600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:30:50.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:30:51.086: INFO: rc: 1
Jan 31 14:30:51.087: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d343f0 exit status 1   true [0xc002106070 0xc002106088 0xc0021060a0] [0xc002106070 0xc002106088 0xc0021060a0] [0xc002106080 0xc002106098] [0xba6c50 0xba6c50] 0xc002ae3680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:31:01.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:31:01.247: INFO: rc: 1
Jan 31 14:31:01.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d34090 exit status 1   true [0xc000010048 0xc002106010 0xc002106028] [0xc000010048 0xc002106010 0xc002106028] [0xc002106008 0xc002106020] [0xba6c50 0xba6c50] 0xc002304f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:31:11.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:31:11.445: INFO: rc: 1
Jan 31 14:31:11.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002f060f0 exit status 1   true [0xc000cda290 0xc000cda4d0 0xc000cda8e8] [0xc000cda290 0xc000cda4d0 0xc000cda8e8] [0xc000cda468 0xc000cda800] [0xba6c50 0xba6c50] 0xc002ec86c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:31:21.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:31:21.622: INFO: rc: 1
Jan 31 14:31:21.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a2090 exit status 1   true [0xc002928000 0xc002928030 0xc002928048] [0xc002928000 0xc002928030 0xc002928048] [0xc002928018 0xc002928040] [0xba6c50 0xba6c50] 0xc002ae3260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:31:31.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:31:31.972: INFO: rc: 1
Jan 31 14:31:31.973: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d341b0 exit status 1   true [0xc002106030 0xc002106048 0xc002106060] [0xc002106030 0xc002106048 0xc002106060] [0xc002106040 0xc002106058] [0xba6c50 0xba6c50] 0xc002305ec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:31:41.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:31:42.184: INFO: rc: 1
Jan 31 14:31:42.185: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002d342a0 exit status 1   true [0xc002106068 0xc002106080 0xc002106098] [0xc002106068 0xc002106080 0xc002106098] [0xc002106078 0xc002106090] [0xba6c50 0xba6c50] 0xc002560360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:31:52.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:31:52.439: INFO: rc: 1
Jan 31 14:31:52.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0030a2180 exit status 1   true [0xc002928050 0xc0029280a0 0xc0029280e8] [0xc002928050 0xc0029280a0 0xc0029280e8] [0xc002928088 0xc0029280c8] [0xba6c50 0xba6c50] 0xc002ae3980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 31 14:32:02.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4663 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:32:02.635: INFO: rc: 1
Jan 31 14:32:02.637: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan 31 14:32:02.637: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 31 14:32:02.712: INFO: Deleting all statefulset in ns statefulset-4663
Jan 31 14:32:02.717: INFO: Scaling statefulset ss to 0
Jan 31 14:32:02.740: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:32:02.743: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:32:02.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4663" for this suite.
Jan 31 14:32:08.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:32:09.001: INFO: namespace statefulset-4663 deletion completed in 6.193308109s

• [SLOW TEST:380.074 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
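The StatefulSet entry above shows the e2e framework's RunHostCmd retry loop: the same `kubectl exec ... mv -v /tmp/index.html ...` is reissued every 10s until it succeeds or the wait deadline expires (here pod `ss-2` had already been deleted, so every attempt returned NotFound and rc 1). A minimal shell sketch of that retry pattern, under stated assumptions — `retry_cmd`, its timeout argument, and the 10s interval are illustrative, not the framework's actual helper:

```shell
# Illustrative sketch of a retry-until-deadline loop (not the e2e framework's code).
# Usage: retry_cmd TIMEOUT_SECONDS COMMAND [ARGS...]
retry_cmd() {
  # Deadline is "now + first argument" seconds.
  deadline=$(( $(date +%s) + $1 ))
  shift
  # Re-run the command until it exits 0; give up once the deadline passes.
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1
    fi
    sleep 10
  done
}
```

With a command that keeps failing (as `kubectl exec` does against a deleted pod), the loop returns nonzero once the deadline is reached, which is the point at which the log above moves on to "Scaling statefulset ss to 0".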
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:32:09.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 31 14:32:09.141: INFO: Waiting up to 5m0s for pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374" in namespace "emptydir-4760" to be "success or failure"
Jan 31 14:32:09.146: INFO: Pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374": Phase="Pending", Reason="", readiness=false. Elapsed: 5.190947ms
Jan 31 14:32:11.161: INFO: Pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020400977s
Jan 31 14:32:13.172: INFO: Pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030798668s
Jan 31 14:32:15.178: INFO: Pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036959607s
Jan 31 14:32:17.189: INFO: Pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048087732s
STEP: Saw pod success
Jan 31 14:32:17.189: INFO: Pod "pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374" satisfied condition "success or failure"
Jan 31 14:32:17.195: INFO: Trying to get logs from node iruya-node pod pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374 container test-container: 
STEP: delete the pod
Jan 31 14:32:17.457: INFO: Waiting for pod pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374 to disappear
Jan 31 14:32:17.464: INFO: Pod pod-043a17bc-73f1-4aab-8af4-4bb11a9f1374 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:32:17.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4760" for this suite.
Jan 31 14:32:23.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:32:23.641: INFO: namespace emptydir-4760 deletion completed in 6.172360378s

• [SLOW TEST:14.640 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:32:23.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:32:23.824: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"45621311-fc60-4158-963f-32172cb32a61", Controller:(*bool)(0xc0028fefea), BlockOwnerDeletion:(*bool)(0xc0028fefeb)}}
Jan 31 14:32:23.888: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"fbe94075-15f3-40a9-8dc1-15efe081bf65", Controller:(*bool)(0xc0028ff18a), BlockOwnerDeletion:(*bool)(0xc0028ff18b)}}
Jan 31 14:32:23.947: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1df3f940-b81b-4613-a075-d75031a7135e", Controller:(*bool)(0xc000d7adfa), BlockOwnerDeletion:(*bool)(0xc000d7adfb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:32:28.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-913" for this suite.
Jan 31 14:32:35.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:32:35.227: INFO: namespace gc-913 deletion completed in 6.217005938s

• [SLOW TEST:11.586 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
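The Garbage collector entry above builds three pods whose ownerReferences form a circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and then verifies that deletion is not blocked by the cycle. A hedged sketch of what one such manifest looks like — the helper name and container spec are illustrative; the UIDs are the ones logged above:

```shell
# Illustrative helper that prints a Pod manifest carrying one ownerReference.
# Usage: make_pod POD_NAME OWNER_POD_NAME OWNER_UID
make_pod() {
cat <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: $1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: $2
    uid: $3
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
}

# The circle recorded in the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
make_pod pod1 pod3 45621311-fc60-4158-963f-32172cb32a61
make_pod pod2 pod1 fbe94075-15f3-40a9-8dc1-15efe081bf65
make_pod pod3 pod2 1df3f940-b81b-4613-a075-d75031a7135e
```

Each `Controller:(*bool)` / `BlockOwnerDeletion:(*bool)` pair in the log lines above corresponds to the `controller: true` / `blockOwnerDeletion: true` fields in these references.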
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:32:35.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 31 14:32:35.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6333'
Jan 31 14:32:35.694: INFO: stderr: ""
Jan 31 14:32:35.694: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 31 14:32:35.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6333'
Jan 31 14:32:40.735: INFO: stderr: ""
Jan 31 14:32:40.736: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:32:40.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6333" for this suite.
Jan 31 14:32:46.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:32:46.949: INFO: namespace kubectl-6333 deletion completed in 6.202845172s

• [SLOW TEST:11.721 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:32:46.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:32:47.057: INFO: Creating ReplicaSet my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82
Jan 31 14:32:47.074: INFO: Pod name my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82: Found 0 pods out of 1
Jan 31 14:32:52.082: INFO: Pod name my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82: Found 1 pods out of 1
Jan 31 14:32:52.082: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82" is running
Jan 31 14:32:56.095: INFO: Pod "my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82-pxmlv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 14:32:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 14:32:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 14:32:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-31 14:32:47 +0000 UTC Reason: Message:}])
Jan 31 14:32:56.096: INFO: Trying to dial the pod
Jan 31 14:33:01.199: INFO: Controller my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82: Got expected result from replica 1 [my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82-pxmlv]: "my-hostname-basic-3efd1fe0-33d0-492d-9f95-aa9d28caee82-pxmlv", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:33:01.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5877" for this suite.
Jan 31 14:33:07.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:33:07.348: INFO: namespace replicaset-5877 deletion completed in 6.139144244s

• [SLOW TEST:20.398 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:33:07.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 31 14:33:07.447: INFO: Waiting up to 5m0s for pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159" in namespace "var-expansion-5016" to be "success or failure"
Jan 31 14:33:07.460: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159": Phase="Pending", Reason="", readiness=false. Elapsed: 12.716293ms
Jan 31 14:33:09.475: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028196921s
Jan 31 14:33:11.484: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036691818s
Jan 31 14:33:13.495: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048033476s
Jan 31 14:33:15.507: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059431421s
Jan 31 14:33:17.515: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067328436s
STEP: Saw pod success
Jan 31 14:33:17.515: INFO: Pod "var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159" satisfied condition "success or failure"
Jan 31 14:33:17.519: INFO: Trying to get logs from node iruya-node pod var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159 container dapi-container: 
STEP: delete the pod
Jan 31 14:33:17.561: INFO: Waiting for pod var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159 to disappear
Jan 31 14:33:17.571: INFO: Pod var-expansion-c2f63a0a-66ef-4072-8e1b-c71a7855b159 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:33:17.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5016" for this suite.
Jan 31 14:33:23.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:33:23.880: INFO: namespace var-expansion-5016 deletion completed in 6.30383365s

• [SLOW TEST:16.531 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:33:23.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-22c02ca8-666b-40d0-b843-69d32fd4c345
STEP: Creating a pod to test consume configMaps
Jan 31 14:33:24.011: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf" in namespace "projected-470" to be "success or failure"
Jan 31 14:33:24.026: INFO: Pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.020751ms
Jan 31 14:33:26.034: INFO: Pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022145262s
Jan 31 14:33:28.049: INFO: Pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037335076s
Jan 31 14:33:30.069: INFO: Pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056864599s
Jan 31 14:33:32.085: INFO: Pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073485592s
STEP: Saw pod success
Jan 31 14:33:32.086: INFO: Pod "pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf" satisfied condition "success or failure"
Jan 31 14:33:32.095: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 14:33:32.268: INFO: Waiting for pod pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf to disappear
Jan 31 14:33:32.282: INFO: Pod pod-projected-configmaps-885973a8-396c-47b3-a7fe-132417d919cf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:33:32.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-470" for this suite.
Jan 31 14:33:38.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:33:38.656: INFO: namespace projected-470 deletion completed in 6.337733867s

• [SLOW TEST:14.776 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
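For reference, the pod exercised by the test above can be approximated by a manifest like the following. This is an illustrative sketch only: the configMap name, key names, and mount path are assumptions, not values recovered from this run. The key point is the `items` mapping, which surfaces the key under a different path inside the projected volume.

```yaml
# Hypothetical sketch: consume a ConfigMap through a projected volume
# with a key-to-path mapping (key "data-1" appears as "path/to/data-2").
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2
```

The pod runs to completion ("success or failure" in the log means phase Succeeded or Failed), which is why the test waits through several Pending polls before seeing Succeeded.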
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:33:38.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5972
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5972
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5972
Jan 31 14:33:38.809: INFO: Found 0 stateful pods, waiting for 1
Jan 31 14:33:48.819: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
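The stateful set `ss` created above can be sketched roughly as the manifest below. The labels, probe, and `podManagementPolicy` are assumptions consistent with what the log shows (an nginx container whose readiness depends on serving `index.html`, and burst scaling, which requires `Parallel` pod management so replicas start without waiting on each other):

```yaml
# Approximate sketch of the test StatefulSet; field values are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test               # headless service created in the namespace
  podManagementPolicy: Parallel   # burst scaling: no ordered startup/teardown
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        readinessProbe:
          httpGet: {path: /index.html, port: 80}
```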
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 31 14:33:48.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:33:49.421: INFO: stderr: "I0131 14:33:49.079345    2933 log.go:172] (0xc00013ee70) (0xc00054c640) Create stream\nI0131 14:33:49.079637    2933 log.go:172] (0xc00013ee70) (0xc00054c640) Stream added, broadcasting: 1\nI0131 14:33:49.092863    2933 log.go:172] (0xc00013ee70) Reply frame received for 1\nI0131 14:33:49.093104    2933 log.go:172] (0xc00013ee70) (0xc000936000) Create stream\nI0131 14:33:49.093124    2933 log.go:172] (0xc00013ee70) (0xc000936000) Stream added, broadcasting: 3\nI0131 14:33:49.099536    2933 log.go:172] (0xc00013ee70) Reply frame received for 3\nI0131 14:33:49.099617    2933 log.go:172] (0xc00013ee70) (0xc00054c6e0) Create stream\nI0131 14:33:49.099640    2933 log.go:172] (0xc00013ee70) (0xc00054c6e0) Stream added, broadcasting: 5\nI0131 14:33:49.102860    2933 log.go:172] (0xc00013ee70) Reply frame received for 5\nI0131 14:33:49.227233    2933 log.go:172] (0xc00013ee70) Data frame received for 5\nI0131 14:33:49.227342    2933 log.go:172] (0xc00054c6e0) (5) Data frame handling\nI0131 14:33:49.227385    2933 log.go:172] (0xc00054c6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:33:49.266130    2933 log.go:172] (0xc00013ee70) Data frame received for 3\nI0131 14:33:49.266208    2933 log.go:172] (0xc000936000) (3) Data frame handling\nI0131 14:33:49.266232    2933 log.go:172] (0xc000936000) (3) Data frame sent\nI0131 14:33:49.406224    2933 log.go:172] (0xc00013ee70) Data frame received for 1\nI0131 14:33:49.406413    2933 log.go:172] (0xc00013ee70) (0xc000936000) Stream removed, broadcasting: 3\nI0131 14:33:49.406496    2933 log.go:172] (0xc00054c640) (1) Data frame handling\nI0131 14:33:49.406527    2933 log.go:172] (0xc00054c640) (1) Data frame sent\nI0131 14:33:49.406576    2933 log.go:172] (0xc00013ee70) (0xc00054c6e0) Stream removed, broadcasting: 5\nI0131 14:33:49.406672    2933 log.go:172] (0xc00013ee70) (0xc00054c640) Stream removed, broadcasting: 1\nI0131 14:33:49.406733    2933 log.go:172] 
(0xc00013ee70) Go away received\nI0131 14:33:49.407931    2933 log.go:172] (0xc00013ee70) (0xc00054c640) Stream removed, broadcasting: 1\nI0131 14:33:49.407955    2933 log.go:172] (0xc00013ee70) (0xc000936000) Stream removed, broadcasting: 3\nI0131 14:33:49.407972    2933 log.go:172] (0xc00013ee70) (0xc00054c6e0) Stream removed, broadcasting: 5\n"
Jan 31 14:33:49.421: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:33:49.421: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
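The exec above moves nginx's `index.html` out of the web root, so the pod's readiness probe starts failing without the container restarting. The trailing `|| true` makes the command safe to repeat on pods where the file was already moved. A local sketch of that pattern, with a temp directory standing in for the container filesystem:

```shell
# Demonstrates the "mv ... || true" idempotence trick used in the test:
# the first move succeeds; repeating it fails (the file is gone from the
# source) but "|| true" forces exit status 0, so kubectl exec sees no error.
root=$(mktemp -d)
mkdir -p "$root/html" "$root/tmp"
echo hello > "$root/html/index.html"

mv -v "$root/html/index.html" "$root/tmp/" || true   # succeeds: probe would now fail
mv -v "$root/html/index.html" "$root/tmp/" || true   # mv errors, but exit status is 0

test -f "$root/tmp/index.html" && echo "index.html parked in tmp"
```

Moving the file back (`mv -v /tmp/index.html /usr/share/nginx/html/ || true`, as the later steps do) restores readiness the same way.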

Jan 31 14:33:49.429: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 31 14:33:59.442: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:33:59.442: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:33:59.552: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 31 14:33:59.552: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:33:59.552: INFO: ss-1              Pending         []
Jan 31 14:33:59.552: INFO: 
Jan 31 14:33:59.552: INFO: StatefulSet ss has not reached scale 3, at 2
Jan 31 14:34:00.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.919709756s
Jan 31 14:34:01.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.851658414s
Jan 31 14:34:03.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.611549709s
Jan 31 14:34:04.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.207931185s
Jan 31 14:34:05.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.094541078s
Jan 31 14:34:07.055: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.083532889s
Jan 31 14:34:08.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.416127793s
Jan 31 14:34:09.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 48.295497ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5972
Jan 31 14:34:10.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:34:11.273: INFO: stderr: "I0131 14:34:10.884259    2952 log.go:172] (0xc00013a0b0) (0xc000644960) Create stream\nI0131 14:34:10.884626    2952 log.go:172] (0xc00013a0b0) (0xc000644960) Stream added, broadcasting: 1\nI0131 14:34:10.949891    2952 log.go:172] (0xc00013a0b0) Reply frame received for 1\nI0131 14:34:10.950461    2952 log.go:172] (0xc00013a0b0) (0xc000766000) Create stream\nI0131 14:34:10.950507    2952 log.go:172] (0xc00013a0b0) (0xc000766000) Stream added, broadcasting: 3\nI0131 14:34:10.952935    2952 log.go:172] (0xc00013a0b0) Reply frame received for 3\nI0131 14:34:10.952972    2952 log.go:172] (0xc00013a0b0) (0xc0007660a0) Create stream\nI0131 14:34:10.952980    2952 log.go:172] (0xc00013a0b0) (0xc0007660a0) Stream added, broadcasting: 5\nI0131 14:34:10.957372    2952 log.go:172] (0xc00013a0b0) Reply frame received for 5\nI0131 14:34:11.127784    2952 log.go:172] (0xc00013a0b0) Data frame received for 3\nI0131 14:34:11.127902    2952 log.go:172] (0xc000766000) (3) Data frame handling\nI0131 14:34:11.127924    2952 log.go:172] (0xc000766000) (3) Data frame sent\nI0131 14:34:11.127973    2952 log.go:172] (0xc00013a0b0) Data frame received for 5\nI0131 14:34:11.127994    2952 log.go:172] (0xc0007660a0) (5) Data frame handling\nI0131 14:34:11.128011    2952 log.go:172] (0xc0007660a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0131 14:34:11.259873    2952 log.go:172] (0xc00013a0b0) Data frame received for 1\nI0131 14:34:11.260062    2952 log.go:172] (0xc00013a0b0) (0xc000766000) Stream removed, broadcasting: 3\nI0131 14:34:11.260143    2952 log.go:172] (0xc000644960) (1) Data frame handling\nI0131 14:34:11.260177    2952 log.go:172] (0xc000644960) (1) Data frame sent\nI0131 14:34:11.260195    2952 log.go:172] (0xc00013a0b0) (0xc0007660a0) Stream removed, broadcasting: 5\nI0131 14:34:11.260228    2952 log.go:172] (0xc00013a0b0) (0xc000644960) Stream removed, broadcasting: 1\nI0131 14:34:11.260249    2952 log.go:172] 
(0xc00013a0b0) Go away received\nI0131 14:34:11.261537    2952 log.go:172] (0xc00013a0b0) (0xc000644960) Stream removed, broadcasting: 1\nI0131 14:34:11.261550    2952 log.go:172] (0xc00013a0b0) (0xc000766000) Stream removed, broadcasting: 3\nI0131 14:34:11.261555    2952 log.go:172] (0xc00013a0b0) (0xc0007660a0) Stream removed, broadcasting: 5\n"
Jan 31 14:34:11.273: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:34:11.274: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:34:11.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:34:11.620: INFO: stderr: "I0131 14:34:11.433234    2965 log.go:172] (0xc0008642c0) (0xc0009706e0) Create stream\nI0131 14:34:11.433443    2965 log.go:172] (0xc0008642c0) (0xc0009706e0) Stream added, broadcasting: 1\nI0131 14:34:11.437636    2965 log.go:172] (0xc0008642c0) Reply frame received for 1\nI0131 14:34:11.437673    2965 log.go:172] (0xc0008642c0) (0xc0005ee280) Create stream\nI0131 14:34:11.437681    2965 log.go:172] (0xc0008642c0) (0xc0005ee280) Stream added, broadcasting: 3\nI0131 14:34:11.438809    2965 log.go:172] (0xc0008642c0) Reply frame received for 3\nI0131 14:34:11.438827    2965 log.go:172] (0xc0008642c0) (0xc000970780) Create stream\nI0131 14:34:11.438833    2965 log.go:172] (0xc0008642c0) (0xc000970780) Stream added, broadcasting: 5\nI0131 14:34:11.440363    2965 log.go:172] (0xc0008642c0) Reply frame received for 5\nI0131 14:34:11.522858    2965 log.go:172] (0xc0008642c0) Data frame received for 5\nI0131 14:34:11.523006    2965 log.go:172] (0xc000970780) (5) Data frame handling\nI0131 14:34:11.523030    2965 log.go:172] (0xc000970780) (5) Data frame sent\nI0131 14:34:11.523048    2965 log.go:172] (0xc0008642c0) Data frame received for 5\nI0131 14:34:11.523066    2965 log.go:172] (0xc000970780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 14:34:11.523110    2965 log.go:172] (0xc0008642c0) Data frame received for 3\nI0131 14:34:11.523140    2965 log.go:172] (0xc0005ee280) (3) Data frame handling\nI0131 14:34:11.523161    2965 log.go:172] (0xc0005ee280) (3) Data frame sent\nI0131 14:34:11.523177    2965 log.go:172] (0xc000970780) (5) Data frame sent\nI0131 14:34:11.609749    2965 log.go:172] (0xc0008642c0) Data frame received for 1\nI0131 14:34:11.609801    2965 log.go:172] (0xc0009706e0) (1) Data frame handling\nI0131 14:34:11.609822    2965 log.go:172] (0xc0009706e0) (1) Data frame sent\nI0131 14:34:11.609844    2965 log.go:172] 
(0xc0008642c0) (0xc0005ee280) Stream removed, broadcasting: 3\nI0131 14:34:11.609904    2965 log.go:172] (0xc0008642c0) (0xc000970780) Stream removed, broadcasting: 5\nI0131 14:34:11.609933    2965 log.go:172] (0xc0008642c0) (0xc0009706e0) Stream removed, broadcasting: 1\nI0131 14:34:11.610433    2965 log.go:172] (0xc0008642c0) Go away received\nI0131 14:34:11.611110    2965 log.go:172] (0xc0008642c0) (0xc0009706e0) Stream removed, broadcasting: 1\nI0131 14:34:11.611127    2965 log.go:172] (0xc0008642c0) (0xc0005ee280) Stream removed, broadcasting: 3\nI0131 14:34:11.611137    2965 log.go:172] (0xc0008642c0) (0xc000970780) Stream removed, broadcasting: 5\n"
Jan 31 14:34:11.621: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:34:11.621: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:34:11.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 31 14:34:12.168: INFO: stderr: "I0131 14:34:11.810414    2986 log.go:172] (0xc000ae6580) (0xc0005a6aa0) Create stream\nI0131 14:34:11.810693    2986 log.go:172] (0xc000ae6580) (0xc0005a6aa0) Stream added, broadcasting: 1\nI0131 14:34:11.817765    2986 log.go:172] (0xc000ae6580) Reply frame received for 1\nI0131 14:34:11.817844    2986 log.go:172] (0xc000ae6580) (0xc00096c000) Create stream\nI0131 14:34:11.817859    2986 log.go:172] (0xc000ae6580) (0xc00096c000) Stream added, broadcasting: 3\nI0131 14:34:11.819249    2986 log.go:172] (0xc000ae6580) Reply frame received for 3\nI0131 14:34:11.819285    2986 log.go:172] (0xc000ae6580) (0xc0009a2000) Create stream\nI0131 14:34:11.819324    2986 log.go:172] (0xc000ae6580) (0xc0009a2000) Stream added, broadcasting: 5\nI0131 14:34:11.824125    2986 log.go:172] (0xc000ae6580) Reply frame received for 5\nI0131 14:34:11.963324    2986 log.go:172] (0xc000ae6580) Data frame received for 3\nI0131 14:34:11.963573    2986 log.go:172] (0xc00096c000) (3) Data frame handling\nI0131 14:34:11.963650    2986 log.go:172] (0xc00096c000) (3) Data frame sent\nI0131 14:34:11.964020    2986 log.go:172] (0xc000ae6580) Data frame received for 5\nI0131 14:34:11.964362    2986 log.go:172] (0xc0009a2000) (5) Data frame handling\nI0131 14:34:11.964443    2986 log.go:172] (0xc0009a2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0131 14:34:12.149755    2986 log.go:172] (0xc000ae6580) Data frame received for 1\nI0131 14:34:12.149934    2986 log.go:172] (0xc000ae6580) (0xc0009a2000) Stream removed, broadcasting: 5\nI0131 14:34:12.149971    2986 log.go:172] (0xc0005a6aa0) (1) Data frame handling\nI0131 14:34:12.149993    2986 log.go:172] (0xc0005a6aa0) (1) Data frame sent\nI0131 14:34:12.150031    2986 log.go:172] (0xc000ae6580) (0xc00096c000) Stream removed, broadcasting: 3\nI0131 14:34:12.150106    2986 log.go:172] (0xc000ae6580) (0xc0005a6aa0) 
Stream removed, broadcasting: 1\nI0131 14:34:12.150157    2986 log.go:172] (0xc000ae6580) Go away received\nI0131 14:34:12.153195    2986 log.go:172] (0xc000ae6580) (0xc0005a6aa0) Stream removed, broadcasting: 1\nI0131 14:34:12.153221    2986 log.go:172] (0xc000ae6580) (0xc00096c000) Stream removed, broadcasting: 3\nI0131 14:34:12.153251    2986 log.go:172] (0xc000ae6580) (0xc0009a2000) Stream removed, broadcasting: 5\n"
Jan 31 14:34:12.168: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 31 14:34:12.168: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 31 14:34:12.182: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:34:12.182: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 31 14:34:12.182: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 31 14:34:12.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:34:12.702: INFO: stderr: "I0131 14:34:12.391854    3007 log.go:172] (0xc000a44420) (0xc0009866e0) Create stream\nI0131 14:34:12.392029    3007 log.go:172] (0xc000a44420) (0xc0009866e0) Stream added, broadcasting: 1\nI0131 14:34:12.399039    3007 log.go:172] (0xc000a44420) Reply frame received for 1\nI0131 14:34:12.399079    3007 log.go:172] (0xc000a44420) (0xc00061c280) Create stream\nI0131 14:34:12.399092    3007 log.go:172] (0xc000a44420) (0xc00061c280) Stream added, broadcasting: 3\nI0131 14:34:12.400339    3007 log.go:172] (0xc000a44420) Reply frame received for 3\nI0131 14:34:12.400459    3007 log.go:172] (0xc000a44420) (0xc000652000) Create stream\nI0131 14:34:12.400475    3007 log.go:172] (0xc000a44420) (0xc000652000) Stream added, broadcasting: 5\nI0131 14:34:12.401667    3007 log.go:172] (0xc000a44420) Reply frame received for 5\nI0131 14:34:12.548648    3007 log.go:172] (0xc000a44420) Data frame received for 5\nI0131 14:34:12.548899    3007 log.go:172] (0xc000652000) (5) Data frame handling\nI0131 14:34:12.548956    3007 log.go:172] (0xc000652000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:34:12.549492    3007 log.go:172] (0xc000a44420) Data frame received for 3\nI0131 14:34:12.549566    3007 log.go:172] (0xc00061c280) (3) Data frame handling\nI0131 14:34:12.549618    3007 log.go:172] (0xc00061c280) (3) Data frame sent\nI0131 14:34:12.691017    3007 log.go:172] (0xc000a44420) (0xc00061c280) Stream removed, broadcasting: 3\nI0131 14:34:12.691172    3007 log.go:172] (0xc000a44420) Data frame received for 1\nI0131 14:34:12.691205    3007 log.go:172] (0xc000a44420) (0xc000652000) Stream removed, broadcasting: 5\nI0131 14:34:12.691455    3007 log.go:172] (0xc0009866e0) (1) Data frame handling\nI0131 14:34:12.691524    3007 log.go:172] (0xc0009866e0) (1) Data frame sent\nI0131 14:34:12.691556    3007 log.go:172] (0xc000a44420) (0xc0009866e0) Stream removed, broadcasting: 1\nI0131 14:34:12.691591    3007 log.go:172] 
(0xc000a44420) Go away received\nI0131 14:34:12.692894    3007 log.go:172] (0xc000a44420) (0xc0009866e0) Stream removed, broadcasting: 1\nI0131 14:34:12.692925    3007 log.go:172] (0xc000a44420) (0xc00061c280) Stream removed, broadcasting: 3\nI0131 14:34:12.692941    3007 log.go:172] (0xc000a44420) (0xc000652000) Stream removed, broadcasting: 5\n"
Jan 31 14:34:12.703: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:34:12.703: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:34:12.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:34:13.102: INFO: stderr: "I0131 14:34:12.877149    3028 log.go:172] (0xc000976370) (0xc000880640) Create stream\nI0131 14:34:12.877326    3028 log.go:172] (0xc000976370) (0xc000880640) Stream added, broadcasting: 1\nI0131 14:34:12.880400    3028 log.go:172] (0xc000976370) Reply frame received for 1\nI0131 14:34:12.880432    3028 log.go:172] (0xc000976370) (0xc00087c000) Create stream\nI0131 14:34:12.880439    3028 log.go:172] (0xc000976370) (0xc00087c000) Stream added, broadcasting: 3\nI0131 14:34:12.881217    3028 log.go:172] (0xc000976370) Reply frame received for 3\nI0131 14:34:12.881232    3028 log.go:172] (0xc000976370) (0xc0008806e0) Create stream\nI0131 14:34:12.881236    3028 log.go:172] (0xc000976370) (0xc0008806e0) Stream added, broadcasting: 5\nI0131 14:34:12.883101    3028 log.go:172] (0xc000976370) Reply frame received for 5\nI0131 14:34:13.016325    3028 log.go:172] (0xc000976370) Data frame received for 5\nI0131 14:34:13.016387    3028 log.go:172] (0xc0008806e0) (5) Data frame handling\nI0131 14:34:13.016405    3028 log.go:172] (0xc0008806e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:34:13.036205    3028 log.go:172] (0xc000976370) Data frame received for 3\nI0131 14:34:13.036217    3028 log.go:172] (0xc00087c000) (3) Data frame handling\nI0131 14:34:13.036224    3028 log.go:172] (0xc00087c000) (3) Data frame sent\nI0131 14:34:13.093581    3028 log.go:172] (0xc000976370) Data frame received for 1\nI0131 14:34:13.093601    3028 log.go:172] (0xc000880640) (1) Data frame handling\nI0131 14:34:13.093612    3028 log.go:172] (0xc000880640) (1) Data frame sent\nI0131 14:34:13.093627    3028 log.go:172] (0xc000976370) (0xc000880640) Stream removed, broadcasting: 1\nI0131 14:34:13.093664    3028 log.go:172] (0xc000976370) (0xc0008806e0) Stream removed, broadcasting: 5\nI0131 14:34:13.093706    3028 log.go:172] (0xc000976370) (0xc00087c000) Stream removed, broadcasting: 3\nI0131 14:34:13.093736    3028 log.go:172] 
(0xc000976370) Go away received\nI0131 14:34:13.094178    3028 log.go:172] (0xc000976370) (0xc000880640) Stream removed, broadcasting: 1\nI0131 14:34:13.094189    3028 log.go:172] (0xc000976370) (0xc00087c000) Stream removed, broadcasting: 3\nI0131 14:34:13.094193    3028 log.go:172] (0xc000976370) (0xc0008806e0) Stream removed, broadcasting: 5\n"
Jan 31 14:34:13.103: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:34:13.103: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:34:13.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5972 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 31 14:34:13.638: INFO: stderr: "I0131 14:34:13.262976    3047 log.go:172] (0xc00072e580) (0xc0005f6a00) Create stream\nI0131 14:34:13.263141    3047 log.go:172] (0xc00072e580) (0xc0005f6a00) Stream added, broadcasting: 1\nI0131 14:34:13.271276    3047 log.go:172] (0xc00072e580) Reply frame received for 1\nI0131 14:34:13.271331    3047 log.go:172] (0xc00072e580) (0xc0006ca000) Create stream\nI0131 14:34:13.271352    3047 log.go:172] (0xc00072e580) (0xc0006ca000) Stream added, broadcasting: 3\nI0131 14:34:13.273036    3047 log.go:172] (0xc00072e580) Reply frame received for 3\nI0131 14:34:13.273144    3047 log.go:172] (0xc00072e580) (0xc0005f6aa0) Create stream\nI0131 14:34:13.273153    3047 log.go:172] (0xc00072e580) (0xc0005f6aa0) Stream added, broadcasting: 5\nI0131 14:34:13.276294    3047 log.go:172] (0xc00072e580) Reply frame received for 5\nI0131 14:34:13.404310    3047 log.go:172] (0xc00072e580) Data frame received for 5\nI0131 14:34:13.404433    3047 log.go:172] (0xc0005f6aa0) (5) Data frame handling\nI0131 14:34:13.404454    3047 log.go:172] (0xc0005f6aa0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0131 14:34:13.455405    3047 log.go:172] (0xc00072e580) Data frame received for 3\nI0131 14:34:13.455511    3047 log.go:172] (0xc0006ca000) (3) Data frame handling\nI0131 14:34:13.455541    3047 log.go:172] (0xc0006ca000) (3) Data frame sent\nI0131 14:34:13.615282    3047 log.go:172] (0xc00072e580) Data frame received for 1\nI0131 14:34:13.616132    3047 log.go:172] (0xc00072e580) (0xc0006ca000) Stream removed, broadcasting: 3\nI0131 14:34:13.616243    3047 log.go:172] (0xc0005f6a00) (1) Data frame handling\nI0131 14:34:13.616283    3047 log.go:172] (0xc0005f6a00) (1) Data frame sent\nI0131 14:34:13.616474    3047 log.go:172] (0xc00072e580) (0xc0005f6aa0) Stream removed, broadcasting: 5\nI0131 14:34:13.616745    3047 log.go:172] (0xc00072e580) (0xc0005f6a00) Stream removed, broadcasting: 1\nI0131 14:34:13.616819    3047 log.go:172] 
(0xc00072e580) Go away received\nI0131 14:34:13.619082    3047 log.go:172] (0xc00072e580) (0xc0005f6a00) Stream removed, broadcasting: 1\nI0131 14:34:13.619121    3047 log.go:172] (0xc00072e580) (0xc0006ca000) Stream removed, broadcasting: 3\nI0131 14:34:13.619133    3047 log.go:172] (0xc00072e580) (0xc0005f6aa0) Stream removed, broadcasting: 5\n"
Jan 31 14:34:13.639: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 31 14:34:13.639: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 31 14:34:13.639: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:34:13.652: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 31 14:34:23.667: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:34:23.667: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:34:23.667: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 31 14:34:23.701: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 14:34:23.701: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:23.702: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:23.702: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:23.702: INFO: 
Jan 31 14:34:23.702: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 14:34:25.414: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 14:34:25.415: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:25.415: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:25.415: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:25.415: INFO: 
Jan 31 14:34:25.415: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 14:34:26.525: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 14:34:26.526: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:26.526: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:26.526: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:26.526: INFO: 
Jan 31 14:34:26.526: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 14:34:27.543: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 14:34:27.543: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:27.543: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:27.543: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:27.543: INFO: 
Jan 31 14:34:27.543: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 14:34:28.570: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 14:34:28.570: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:28.571: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:28.571: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:28.571: INFO: 
Jan 31 14:34:28.571: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 14:34:29.581: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 31 14:34:29.581: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:29.581: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:29.581: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:29.581: INFO: 
Jan 31 14:34:29.581: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 31 14:34:30.597: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 31 14:34:30.597: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:30.597: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:30.597: INFO: 
Jan 31 14:34:30.597: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 14:34:31.607: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 31 14:34:31.607: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:31.607: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:31.607: INFO: 
Jan 31 14:34:31.607: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 14:34:32.624: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 31 14:34:32.624: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:38 +0000 UTC  }]
Jan 31 14:34:32.625: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:34:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:33:59 +0000 UTC  }]
Jan 31 14:34:32.625: INFO: 
Jan 31 14:34:32.625: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 31 14:34:33.643: INFO: Verifying statefulset ss doesn't scale past 0 for another 53.596608ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5972
Jan 31 14:34:34.662: INFO: Scaling statefulset ss to 0
Jan 31 14:34:34.683: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 31 14:34:34.686: INFO: Deleting all statefulset in ns statefulset-5972
Jan 31 14:34:34.689: INFO: Scaling statefulset ss to 0
Jan 31 14:34:34.703: INFO: Waiting for statefulset status.replicas updated to 0
Jan 31 14:34:34.706: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:34:34.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5972" for this suite.
Jan 31 14:34:40.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:34:40.947: INFO: namespace statefulset-5972 deletion completed in 6.208711155s

• [SLOW TEST:62.289 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
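The scale-down sequence recorded in the test above corresponds roughly to the following kubectl flow (an illustrative sketch only; the namespace and StatefulSet name are taken from the log, and this assumes a live cluster with a matching `ss` StatefulSet):

```shell
# Scale the StatefulSet down to 0 replicas, as the e2e framework does programmatically.
kubectl scale statefulset ss --replicas=0 --namespace=statefulset-5972

# Poll until status.replicas reaches 0; the log above shows the framework doing
# the same wait ("Waiting for statefulset status.replicas updated to 0").
kubectl rollout status statefulset/ss --namespace=statefulset-5972

# Cleanup mirrors the [AfterEach] phase: delete the StatefulSet, then the namespace.
kubectl delete statefulset ss --namespace=statefulset-5972
kubectl delete namespace statefulset-5972
```

This is a manual approximation; the actual e2e suite performs these steps through the Go client in test/e2e/apps/statefulset.go rather than by shelling out to kubectl.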
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:34:40.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 31 14:35:01.185: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:01.185: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:01.272426       9 log.go:172] (0xc000313810) (0xc0013bac80) Create stream
I0131 14:35:01.272799       9 log.go:172] (0xc000313810) (0xc0013bac80) Stream added, broadcasting: 1
I0131 14:35:01.285139       9 log.go:172] (0xc000313810) Reply frame received for 1
I0131 14:35:01.285195       9 log.go:172] (0xc000313810) (0xc0013bafa0) Create stream
I0131 14:35:01.285212       9 log.go:172] (0xc000313810) (0xc0013bafa0) Stream added, broadcasting: 3
I0131 14:35:01.287633       9 log.go:172] (0xc000313810) Reply frame received for 3
I0131 14:35:01.287668       9 log.go:172] (0xc000313810) (0xc00294a000) Create stream
I0131 14:35:01.287683       9 log.go:172] (0xc000313810) (0xc00294a000) Stream added, broadcasting: 5
I0131 14:35:01.289922       9 log.go:172] (0xc000313810) Reply frame received for 5
I0131 14:35:01.427351       9 log.go:172] (0xc000313810) Data frame received for 3
I0131 14:35:01.427502       9 log.go:172] (0xc0013bafa0) (3) Data frame handling
I0131 14:35:01.427546       9 log.go:172] (0xc0013bafa0) (3) Data frame sent
I0131 14:35:01.583232       9 log.go:172] (0xc000313810) Data frame received for 1
I0131 14:35:01.583447       9 log.go:172] (0xc000313810) (0xc00294a000) Stream removed, broadcasting: 5
I0131 14:35:01.583498       9 log.go:172] (0xc0013bac80) (1) Data frame handling
I0131 14:35:01.583528       9 log.go:172] (0xc0013bac80) (1) Data frame sent
I0131 14:35:01.583592       9 log.go:172] (0xc000313810) (0xc0013bafa0) Stream removed, broadcasting: 3
I0131 14:35:01.583666       9 log.go:172] (0xc000313810) (0xc0013bac80) Stream removed, broadcasting: 1
I0131 14:35:01.583699       9 log.go:172] (0xc000313810) Go away received
I0131 14:35:01.584121       9 log.go:172] (0xc000313810) (0xc0013bac80) Stream removed, broadcasting: 1
I0131 14:35:01.584133       9 log.go:172] (0xc000313810) (0xc0013bafa0) Stream removed, broadcasting: 3
I0131 14:35:01.584140       9 log.go:172] (0xc000313810) (0xc00294a000) Stream removed, broadcasting: 5
Jan 31 14:35:01.584: INFO: Exec stderr: ""
Jan 31 14:35:01.584: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:01.584: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:01.642412       9 log.go:172] (0xc000ae1a20) (0xc0013bb540) Create stream
I0131 14:35:01.642518       9 log.go:172] (0xc000ae1a20) (0xc0013bb540) Stream added, broadcasting: 1
I0131 14:35:01.649719       9 log.go:172] (0xc000ae1a20) Reply frame received for 1
I0131 14:35:01.649749       9 log.go:172] (0xc000ae1a20) (0xc0013bb680) Create stream
I0131 14:35:01.649766       9 log.go:172] (0xc000ae1a20) (0xc0013bb680) Stream added, broadcasting: 3
I0131 14:35:01.651145       9 log.go:172] (0xc000ae1a20) Reply frame received for 3
I0131 14:35:01.651175       9 log.go:172] (0xc000ae1a20) (0xc0014d8000) Create stream
I0131 14:35:01.651183       9 log.go:172] (0xc000ae1a20) (0xc0014d8000) Stream added, broadcasting: 5
I0131 14:35:01.652707       9 log.go:172] (0xc000ae1a20) Reply frame received for 5
I0131 14:35:01.761179       9 log.go:172] (0xc000ae1a20) Data frame received for 3
I0131 14:35:01.761310       9 log.go:172] (0xc0013bb680) (3) Data frame handling
I0131 14:35:01.761337       9 log.go:172] (0xc0013bb680) (3) Data frame sent
I0131 14:35:01.957354       9 log.go:172] (0xc000ae1a20) (0xc0013bb680) Stream removed, broadcasting: 3
I0131 14:35:01.957637       9 log.go:172] (0xc000ae1a20) Data frame received for 1
I0131 14:35:01.957654       9 log.go:172] (0xc0013bb540) (1) Data frame handling
I0131 14:35:01.957677       9 log.go:172] (0xc0013bb540) (1) Data frame sent
I0131 14:35:01.957816       9 log.go:172] (0xc000ae1a20) (0xc0013bb540) Stream removed, broadcasting: 1
I0131 14:35:01.957994       9 log.go:172] (0xc000ae1a20) (0xc0014d8000) Stream removed, broadcasting: 5
I0131 14:35:01.958017       9 log.go:172] (0xc000ae1a20) Go away received
I0131 14:35:01.958856       9 log.go:172] (0xc000ae1a20) (0xc0013bb540) Stream removed, broadcasting: 1
I0131 14:35:01.958895       9 log.go:172] (0xc000ae1a20) (0xc0013bb680) Stream removed, broadcasting: 3
I0131 14:35:01.958929       9 log.go:172] (0xc000ae1a20) (0xc0014d8000) Stream removed, broadcasting: 5
Jan 31 14:35:01.958: INFO: Exec stderr: ""
Jan 31 14:35:01.959: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:01.959: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:02.058424       9 log.go:172] (0xc000611130) (0xc0011543c0) Create stream
I0131 14:35:02.058923       9 log.go:172] (0xc000611130) (0xc0011543c0) Stream added, broadcasting: 1
I0131 14:35:02.071692       9 log.go:172] (0xc000611130) Reply frame received for 1
I0131 14:35:02.071801       9 log.go:172] (0xc000611130) (0xc0014d81e0) Create stream
I0131 14:35:02.071816       9 log.go:172] (0xc000611130) (0xc0014d81e0) Stream added, broadcasting: 3
I0131 14:35:02.073995       9 log.go:172] (0xc000611130) Reply frame received for 3
I0131 14:35:02.074015       9 log.go:172] (0xc000611130) (0xc00294a0a0) Create stream
I0131 14:35:02.074023       9 log.go:172] (0xc000611130) (0xc00294a0a0) Stream added, broadcasting: 5
I0131 14:35:02.075270       9 log.go:172] (0xc000611130) Reply frame received for 5
I0131 14:35:02.174794       9 log.go:172] (0xc000611130) Data frame received for 3
I0131 14:35:02.174900       9 log.go:172] (0xc0014d81e0) (3) Data frame handling
I0131 14:35:02.174924       9 log.go:172] (0xc0014d81e0) (3) Data frame sent
I0131 14:35:02.326805       9 log.go:172] (0xc000611130) Data frame received for 1
I0131 14:35:02.327087       9 log.go:172] (0xc000611130) (0xc00294a0a0) Stream removed, broadcasting: 5
I0131 14:35:02.327169       9 log.go:172] (0xc0011543c0) (1) Data frame handling
I0131 14:35:02.327210       9 log.go:172] (0xc0011543c0) (1) Data frame sent
I0131 14:35:02.327343       9 log.go:172] (0xc000611130) (0xc0014d81e0) Stream removed, broadcasting: 3
I0131 14:35:02.327377       9 log.go:172] (0xc000611130) (0xc0011543c0) Stream removed, broadcasting: 1
I0131 14:35:02.327402       9 log.go:172] (0xc000611130) Go away received
I0131 14:35:02.327722       9 log.go:172] (0xc000611130) (0xc0011543c0) Stream removed, broadcasting: 1
I0131 14:35:02.327734       9 log.go:172] (0xc000611130) (0xc0014d81e0) Stream removed, broadcasting: 3
I0131 14:35:02.327739       9 log.go:172] (0xc000611130) (0xc00294a0a0) Stream removed, broadcasting: 5
Jan 31 14:35:02.327: INFO: Exec stderr: ""
Jan 31 14:35:02.327: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:02.328: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:02.409771       9 log.go:172] (0xc0014ba4d0) (0xc001946280) Create stream
I0131 14:35:02.410014       9 log.go:172] (0xc0014ba4d0) (0xc001946280) Stream added, broadcasting: 1
I0131 14:35:02.416035       9 log.go:172] (0xc0014ba4d0) Reply frame received for 1
I0131 14:35:02.416081       9 log.go:172] (0xc0014ba4d0) (0xc0013bb9a0) Create stream
I0131 14:35:02.416092       9 log.go:172] (0xc0014ba4d0) (0xc0013bb9a0) Stream added, broadcasting: 3
I0131 14:35:02.417112       9 log.go:172] (0xc0014ba4d0) Reply frame received for 3
I0131 14:35:02.417134       9 log.go:172] (0xc0014ba4d0) (0xc0014d83c0) Create stream
I0131 14:35:02.417141       9 log.go:172] (0xc0014ba4d0) (0xc0014d83c0) Stream added, broadcasting: 5
I0131 14:35:02.418566       9 log.go:172] (0xc0014ba4d0) Reply frame received for 5
I0131 14:35:02.576301       9 log.go:172] (0xc0014ba4d0) Data frame received for 3
I0131 14:35:02.576562       9 log.go:172] (0xc0013bb9a0) (3) Data frame handling
I0131 14:35:02.576598       9 log.go:172] (0xc0013bb9a0) (3) Data frame sent
I0131 14:35:02.727853       9 log.go:172] (0xc0014ba4d0) (0xc0013bb9a0) Stream removed, broadcasting: 3
I0131 14:35:02.727995       9 log.go:172] (0xc0014ba4d0) Data frame received for 1
I0131 14:35:02.728281       9 log.go:172] (0xc0014ba4d0) (0xc0014d83c0) Stream removed, broadcasting: 5
I0131 14:35:02.728332       9 log.go:172] (0xc001946280) (1) Data frame handling
I0131 14:35:02.728360       9 log.go:172] (0xc001946280) (1) Data frame sent
I0131 14:35:02.728377       9 log.go:172] (0xc0014ba4d0) (0xc001946280) Stream removed, broadcasting: 1
I0131 14:35:02.728396       9 log.go:172] (0xc0014ba4d0) Go away received
I0131 14:35:02.728764       9 log.go:172] (0xc0014ba4d0) (0xc001946280) Stream removed, broadcasting: 1
I0131 14:35:02.728784       9 log.go:172] (0xc0014ba4d0) (0xc0013bb9a0) Stream removed, broadcasting: 3
I0131 14:35:02.728796       9 log.go:172] (0xc0014ba4d0) (0xc0014d83c0) Stream removed, broadcasting: 5
Jan 31 14:35:02.728: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 31 14:35:02.729: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:02.729: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:02.804798       9 log.go:172] (0xc00092d760) (0xc00294a3c0) Create stream
I0131 14:35:02.805095       9 log.go:172] (0xc00092d760) (0xc00294a3c0) Stream added, broadcasting: 1
I0131 14:35:02.814859       9 log.go:172] (0xc00092d760) Reply frame received for 1
I0131 14:35:02.814903       9 log.go:172] (0xc00092d760) (0xc0026b6000) Create stream
I0131 14:35:02.814912       9 log.go:172] (0xc00092d760) (0xc0026b6000) Stream added, broadcasting: 3
I0131 14:35:02.817262       9 log.go:172] (0xc00092d760) Reply frame received for 3
I0131 14:35:02.817284       9 log.go:172] (0xc00092d760) (0xc00294a460) Create stream
I0131 14:35:02.817290       9 log.go:172] (0xc00092d760) (0xc00294a460) Stream added, broadcasting: 5
I0131 14:35:02.818750       9 log.go:172] (0xc00092d760) Reply frame received for 5
I0131 14:35:02.947479       9 log.go:172] (0xc00092d760) Data frame received for 3
I0131 14:35:02.947558       9 log.go:172] (0xc0026b6000) (3) Data frame handling
I0131 14:35:02.947583       9 log.go:172] (0xc0026b6000) (3) Data frame sent
I0131 14:35:03.093370       9 log.go:172] (0xc00092d760) Data frame received for 1
I0131 14:35:03.093753       9 log.go:172] (0xc00092d760) (0xc00294a460) Stream removed, broadcasting: 5
I0131 14:35:03.093814       9 log.go:172] (0xc00294a3c0) (1) Data frame handling
I0131 14:35:03.093844       9 log.go:172] (0xc00294a3c0) (1) Data frame sent
I0131 14:35:03.093881       9 log.go:172] (0xc00092d760) (0xc0026b6000) Stream removed, broadcasting: 3
I0131 14:35:03.093919       9 log.go:172] (0xc00092d760) (0xc00294a3c0) Stream removed, broadcasting: 1
I0131 14:35:03.093936       9 log.go:172] (0xc00092d760) Go away received
I0131 14:35:03.094434       9 log.go:172] (0xc00092d760) (0xc00294a3c0) Stream removed, broadcasting: 1
I0131 14:35:03.094450       9 log.go:172] (0xc00092d760) (0xc0026b6000) Stream removed, broadcasting: 3
I0131 14:35:03.094460       9 log.go:172] (0xc00092d760) (0xc00294a460) Stream removed, broadcasting: 5
Jan 31 14:35:03.094: INFO: Exec stderr: ""
Jan 31 14:35:03.094: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:03.094: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:03.155823       9 log.go:172] (0xc000611e40) (0xc001154960) Create stream
I0131 14:35:03.155959       9 log.go:172] (0xc000611e40) (0xc001154960) Stream added, broadcasting: 1
I0131 14:35:03.163708       9 log.go:172] (0xc000611e40) Reply frame received for 1
I0131 14:35:03.163743       9 log.go:172] (0xc000611e40) (0xc00294a500) Create stream
I0131 14:35:03.163753       9 log.go:172] (0xc000611e40) (0xc00294a500) Stream added, broadcasting: 3
I0131 14:35:03.168115       9 log.go:172] (0xc000611e40) Reply frame received for 3
I0131 14:35:03.168133       9 log.go:172] (0xc000611e40) (0xc00294a5a0) Create stream
I0131 14:35:03.168152       9 log.go:172] (0xc000611e40) (0xc00294a5a0) Stream added, broadcasting: 5
I0131 14:35:03.171038       9 log.go:172] (0xc000611e40) Reply frame received for 5
I0131 14:35:03.258148       9 log.go:172] (0xc000611e40) Data frame received for 3
I0131 14:35:03.258275       9 log.go:172] (0xc00294a500) (3) Data frame handling
I0131 14:35:03.258324       9 log.go:172] (0xc00294a500) (3) Data frame sent
I0131 14:35:03.390214       9 log.go:172] (0xc000611e40) (0xc00294a5a0) Stream removed, broadcasting: 5
I0131 14:35:03.390383       9 log.go:172] (0xc000611e40) Data frame received for 1
I0131 14:35:03.390422       9 log.go:172] (0xc000611e40) (0xc00294a500) Stream removed, broadcasting: 3
I0131 14:35:03.390452       9 log.go:172] (0xc001154960) (1) Data frame handling
I0131 14:35:03.390484       9 log.go:172] (0xc001154960) (1) Data frame sent
I0131 14:35:03.390500       9 log.go:172] (0xc000611e40) (0xc001154960) Stream removed, broadcasting: 1
I0131 14:35:03.390516       9 log.go:172] (0xc000611e40) Go away received
I0131 14:35:03.391168       9 log.go:172] (0xc000611e40) (0xc001154960) Stream removed, broadcasting: 1
I0131 14:35:03.391187       9 log.go:172] (0xc000611e40) (0xc00294a500) Stream removed, broadcasting: 3
I0131 14:35:03.391197       9 log.go:172] (0xc000611e40) (0xc00294a5a0) Stream removed, broadcasting: 5
Jan 31 14:35:03.391: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 31 14:35:03.391: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:03.391: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:03.450669       9 log.go:172] (0xc0010e11e0) (0xc002abe0a0) Create stream
I0131 14:35:03.450835       9 log.go:172] (0xc0010e11e0) (0xc002abe0a0) Stream added, broadcasting: 1
I0131 14:35:03.462296       9 log.go:172] (0xc0010e11e0) Reply frame received for 1
I0131 14:35:03.462350       9 log.go:172] (0xc0010e11e0) (0xc001154aa0) Create stream
I0131 14:35:03.462360       9 log.go:172] (0xc0010e11e0) (0xc001154aa0) Stream added, broadcasting: 3
I0131 14:35:03.464648       9 log.go:172] (0xc0010e11e0) Reply frame received for 3
I0131 14:35:03.464671       9 log.go:172] (0xc0010e11e0) (0xc001946320) Create stream
I0131 14:35:03.464682       9 log.go:172] (0xc0010e11e0) (0xc001946320) Stream added, broadcasting: 5
I0131 14:35:03.466484       9 log.go:172] (0xc0010e11e0) Reply frame received for 5
I0131 14:35:03.581897       9 log.go:172] (0xc0010e11e0) Data frame received for 3
I0131 14:35:03.582075       9 log.go:172] (0xc001154aa0) (3) Data frame handling
I0131 14:35:03.582133       9 log.go:172] (0xc001154aa0) (3) Data frame sent
I0131 14:35:03.800041       9 log.go:172] (0xc0010e11e0) Data frame received for 1
I0131 14:35:03.800332       9 log.go:172] (0xc0010e11e0) (0xc001154aa0) Stream removed, broadcasting: 3
I0131 14:35:03.800543       9 log.go:172] (0xc002abe0a0) (1) Data frame handling
I0131 14:35:03.800646       9 log.go:172] (0xc002abe0a0) (1) Data frame sent
I0131 14:35:03.800661       9 log.go:172] (0xc0010e11e0) (0xc002abe0a0) Stream removed, broadcasting: 1
I0131 14:35:03.801329       9 log.go:172] (0xc0010e11e0) (0xc001946320) Stream removed, broadcasting: 5
I0131 14:35:03.801402       9 log.go:172] (0xc0010e11e0) Go away received
I0131 14:35:03.801432       9 log.go:172] (0xc0010e11e0) (0xc002abe0a0) Stream removed, broadcasting: 1
I0131 14:35:03.801450       9 log.go:172] (0xc0010e11e0) (0xc001154aa0) Stream removed, broadcasting: 3
I0131 14:35:03.801467       9 log.go:172] (0xc0010e11e0) (0xc001946320) Stream removed, broadcasting: 5
Jan 31 14:35:03.801: INFO: Exec stderr: ""
Jan 31 14:35:03.801: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:03.801: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:03.998368       9 log.go:172] (0xc001de8c60) (0xc0011555e0) Create stream
I0131 14:35:03.998613       9 log.go:172] (0xc001de8c60) (0xc0011555e0) Stream added, broadcasting: 1
I0131 14:35:04.028276       9 log.go:172] (0xc001de8c60) Reply frame received for 1
I0131 14:35:04.028500       9 log.go:172] (0xc001de8c60) (0xc00294a640) Create stream
I0131 14:35:04.028527       9 log.go:172] (0xc001de8c60) (0xc00294a640) Stream added, broadcasting: 3
I0131 14:35:04.033452       9 log.go:172] (0xc001de8c60) Reply frame received for 3
I0131 14:35:04.033573       9 log.go:172] (0xc001de8c60) (0xc002abe140) Create stream
I0131 14:35:04.033615       9 log.go:172] (0xc001de8c60) (0xc002abe140) Stream added, broadcasting: 5
I0131 14:35:04.040979       9 log.go:172] (0xc001de8c60) Reply frame received for 5
I0131 14:35:04.296706       9 log.go:172] (0xc001de8c60) Data frame received for 3
I0131 14:35:04.296827       9 log.go:172] (0xc00294a640) (3) Data frame handling
I0131 14:35:04.296861       9 log.go:172] (0xc00294a640) (3) Data frame sent
I0131 14:35:04.424404       9 log.go:172] (0xc001de8c60) Data frame received for 1
I0131 14:35:04.424593       9 log.go:172] (0xc001de8c60) (0xc00294a640) Stream removed, broadcasting: 3
I0131 14:35:04.424638       9 log.go:172] (0xc0011555e0) (1) Data frame handling
I0131 14:35:04.424670       9 log.go:172] (0xc0011555e0) (1) Data frame sent
I0131 14:35:04.424702       9 log.go:172] (0xc001de8c60) (0xc002abe140) Stream removed, broadcasting: 5
I0131 14:35:04.424755       9 log.go:172] (0xc001de8c60) (0xc0011555e0) Stream removed, broadcasting: 1
I0131 14:35:04.424769       9 log.go:172] (0xc001de8c60) Go away received
I0131 14:35:04.425232       9 log.go:172] (0xc001de8c60) (0xc0011555e0) Stream removed, broadcasting: 1
I0131 14:35:04.425240       9 log.go:172] (0xc001de8c60) (0xc00294a640) Stream removed, broadcasting: 3
I0131 14:35:04.425248       9 log.go:172] (0xc001de8c60) (0xc002abe140) Stream removed, broadcasting: 5
Jan 31 14:35:04.425: INFO: Exec stderr: ""
Jan 31 14:35:04.425: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:04.426: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:04.492997       9 log.go:172] (0xc0018d6840) (0xc0026b6320) Create stream
I0131 14:35:04.493211       9 log.go:172] (0xc0018d6840) (0xc0026b6320) Stream added, broadcasting: 1
I0131 14:35:04.503278       9 log.go:172] (0xc0018d6840) Reply frame received for 1
I0131 14:35:04.503375       9 log.go:172] (0xc0018d6840) (0xc001946460) Create stream
I0131 14:35:04.503393       9 log.go:172] (0xc0018d6840) (0xc001946460) Stream added, broadcasting: 3
I0131 14:35:04.505091       9 log.go:172] (0xc0018d6840) Reply frame received for 3
I0131 14:35:04.505117       9 log.go:172] (0xc0018d6840) (0xc00294a6e0) Create stream
I0131 14:35:04.505129       9 log.go:172] (0xc0018d6840) (0xc00294a6e0) Stream added, broadcasting: 5
I0131 14:35:04.510509       9 log.go:172] (0xc0018d6840) Reply frame received for 5
I0131 14:35:04.604823       9 log.go:172] (0xc0018d6840) Data frame received for 3
I0131 14:35:04.604962       9 log.go:172] (0xc001946460) (3) Data frame handling
I0131 14:35:04.604992       9 log.go:172] (0xc001946460) (3) Data frame sent
I0131 14:35:04.690071       9 log.go:172] (0xc0018d6840) (0xc001946460) Stream removed, broadcasting: 3
I0131 14:35:04.690194       9 log.go:172] (0xc0018d6840) (0xc00294a6e0) Stream removed, broadcasting: 5
I0131 14:35:04.690224       9 log.go:172] (0xc0018d6840) Data frame received for 1
I0131 14:35:04.690267       9 log.go:172] (0xc0026b6320) (1) Data frame handling
I0131 14:35:04.690308       9 log.go:172] (0xc0026b6320) (1) Data frame sent
I0131 14:35:04.690332       9 log.go:172] (0xc0018d6840) (0xc0026b6320) Stream removed, broadcasting: 1
I0131 14:35:04.690462       9 log.go:172] (0xc0018d6840) Go away received
I0131 14:35:04.690862       9 log.go:172] (0xc0018d6840) (0xc0026b6320) Stream removed, broadcasting: 1
I0131 14:35:04.690880       9 log.go:172] (0xc0018d6840) (0xc001946460) Stream removed, broadcasting: 3
I0131 14:35:04.690889       9 log.go:172] (0xc0018d6840) (0xc00294a6e0) Stream removed, broadcasting: 5
Jan 31 14:35:04.690: INFO: Exec stderr: ""
Jan 31 14:35:04.691: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6082 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:35:04.691: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:35:04.740237       9 log.go:172] (0xc0018d74a0) (0xc0026b6640) Create stream
I0131 14:35:04.740340       9 log.go:172] (0xc0018d74a0) (0xc0026b6640) Stream added, broadcasting: 1
I0131 14:35:04.744932       9 log.go:172] (0xc0018d74a0) Reply frame received for 1
I0131 14:35:04.744976       9 log.go:172] (0xc0018d74a0) (0xc002abe1e0) Create stream
I0131 14:35:04.744988       9 log.go:172] (0xc0018d74a0) (0xc002abe1e0) Stream added, broadcasting: 3
I0131 14:35:04.746287       9 log.go:172] (0xc0018d74a0) Reply frame received for 3
I0131 14:35:04.746309       9 log.go:172] (0xc0018d74a0) (0xc0011557c0) Create stream
I0131 14:35:04.746319       9 log.go:172] (0xc0018d74a0) (0xc0011557c0) Stream added, broadcasting: 5
I0131 14:35:04.747574       9 log.go:172] (0xc0018d74a0) Reply frame received for 5
I0131 14:35:04.845080       9 log.go:172] (0xc0018d74a0) Data frame received for 3
I0131 14:35:04.845127       9 log.go:172] (0xc002abe1e0) (3) Data frame handling
I0131 14:35:04.845159       9 log.go:172] (0xc002abe1e0) (3) Data frame sent
I0131 14:35:04.996415       9 log.go:172] (0xc0018d74a0) Data frame received for 1
I0131 14:35:04.996721       9 log.go:172] (0xc0018d74a0) (0xc002abe1e0) Stream removed, broadcasting: 3
I0131 14:35:04.996801       9 log.go:172] (0xc0026b6640) (1) Data frame handling
I0131 14:35:04.996838       9 log.go:172] (0xc0026b6640) (1) Data frame sent
I0131 14:35:04.996884       9 log.go:172] (0xc0018d74a0) (0xc0011557c0) Stream removed, broadcasting: 5
I0131 14:35:04.996930       9 log.go:172] (0xc0018d74a0) (0xc0026b6640) Stream removed, broadcasting: 1
I0131 14:35:04.996982       9 log.go:172] (0xc0018d74a0) Go away received
I0131 14:35:04.998400       9 log.go:172] (0xc0018d74a0) (0xc0026b6640) Stream removed, broadcasting: 1
I0131 14:35:04.998463       9 log.go:172] (0xc0018d74a0) (0xc002abe1e0) Stream removed, broadcasting: 3
I0131 14:35:04.998494       9 log.go:172] (0xc0018d74a0) (0xc0011557c0) Stream removed, broadcasting: 5
Jan 31 14:35:04.998: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:35:04.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6082" for this suite.
Jan 31 14:35:49.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:35:49.154: INFO: namespace e2e-kubelet-etc-hosts-6082 deletion completed in 44.145640632s

• [SLOW TEST:68.205 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
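Editor's note: the KubeletManagedEtcHosts spec above execs `cat /etc/hosts` inside a host-network pod. A manifest along the following lines would reproduce that setup; the pod and container names are taken from the log, while the image and command are assumptions (the authoritative spec lives in the e2e test source):

```yaml
# Hypothetical sketch, not the test's exact spec.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true        # with hostNetwork, the kubelet does NOT manage /etc/hosts
  containers:
  - name: busybox-2
    image: busybox         # assumed image
    command: ["sleep", "3600"]
```

Because `hostNetwork: true` is set, `/etc/hosts` inside the container is the node's own file rather than the kubelet-generated one, which is what the exec above verifies.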
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:35:49.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 31 14:36:01.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-81f8fd76-9f5d-493a-9b84-94e435acc017 -c busybox-main-container --namespace=emptydir-5103 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 31 14:36:01.742: INFO: stderr: "I0131 14:36:01.487936    3068 log.go:172] (0xc000958370) (0xc000968820) Create stream\nI0131 14:36:01.488290    3068 log.go:172] (0xc000958370) (0xc000968820) Stream added, broadcasting: 1\nI0131 14:36:01.494321    3068 log.go:172] (0xc000958370) Reply frame received for 1\nI0131 14:36:01.494355    3068 log.go:172] (0xc000958370) (0xc00095a000) Create stream\nI0131 14:36:01.494364    3068 log.go:172] (0xc000958370) (0xc00095a000) Stream added, broadcasting: 3\nI0131 14:36:01.495881    3068 log.go:172] (0xc000958370) Reply frame received for 3\nI0131 14:36:01.495906    3068 log.go:172] (0xc000958370) (0xc0005e2280) Create stream\nI0131 14:36:01.495918    3068 log.go:172] (0xc000958370) (0xc0005e2280) Stream added, broadcasting: 5\nI0131 14:36:01.497056    3068 log.go:172] (0xc000958370) Reply frame received for 5\nI0131 14:36:01.598254    3068 log.go:172] (0xc000958370) Data frame received for 3\nI0131 14:36:01.598346    3068 log.go:172] (0xc00095a000) (3) Data frame handling\nI0131 14:36:01.598364    3068 log.go:172] (0xc00095a000) (3) Data frame sent\nI0131 14:36:01.730752    3068 log.go:172] (0xc000958370) Data frame received for 1\nI0131 14:36:01.730917    3068 log.go:172] (0xc000958370) (0xc0005e2280) Stream removed, broadcasting: 5\nI0131 14:36:01.731064    3068 log.go:172] (0xc000958370) (0xc00095a000) Stream removed, broadcasting: 3\nI0131 14:36:01.731116    3068 log.go:172] (0xc000968820) (1) Data frame handling\nI0131 14:36:01.731129    3068 log.go:172] (0xc000968820) (1) Data frame sent\nI0131 14:36:01.731135    3068 log.go:172] (0xc000958370) (0xc000968820) Stream removed, broadcasting: 1\nI0131 14:36:01.731145    3068 log.go:172] (0xc000958370) Go away received\nI0131 14:36:01.732403    3068 log.go:172] (0xc000958370) (0xc000968820) Stream removed, broadcasting: 1\nI0131 14:36:01.732441    3068 log.go:172] (0xc000958370) (0xc00095a000) Stream removed, broadcasting: 3\nI0131 14:36:01.732456    3068 log.go:172] (0xc000958370) (0xc0005e2280) Stream removed, broadcasting: 5\n"
Jan 31 14:36:01.742: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:36:01.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5103" for this suite.
Jan 31 14:36:08.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:36:08.310: INFO: namespace emptydir-5103 deletion completed in 6.157286754s

• [SLOW TEST:19.156 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
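Editor's note: the shared-volume pod created by the EmptyDir spec above follows the standard two-containers-one-emptyDir pattern. A rough sketch is below; the container names and mount path come from the log, while the images, the writer command, and the pod name are assumptions:

```yaml
# Hypothetical sketch of the shared-volume pod exercised above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example   # the test generates a suffixed name
spec:
  volumes:
  - name: volumeshare
    emptyDir: {}
  containers:
  - name: busybox-main-container
    image: busybox                 # assumed image
    command: ["sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  - name: nginx-container
    image: nginx:alpine            # assumed image
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
```

Both containers mount the same `emptyDir`, so a file written by one is readable by the other, matching the `cat /usr/share/volumeshare/shareddata.txt` exec in the log.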
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:36:08.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-c7rj
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 14:36:08.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c7rj" in namespace "subpath-4763" to be "success or failure"
Jan 31 14:36:08.474: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.469962ms
Jan 31 14:36:10.492: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030633715s
Jan 31 14:36:12.507: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045344104s
Jan 31 14:36:14.526: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064215599s
Jan 31 14:36:16.553: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 8.091223917s
Jan 31 14:36:18.577: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 10.115859392s
Jan 31 14:36:20.587: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 12.125911253s
Jan 31 14:36:22.600: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 14.138312986s
Jan 31 14:36:24.610: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 16.14813613s
Jan 31 14:36:26.628: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 18.166955051s
Jan 31 14:36:28.649: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 20.18714246s
Jan 31 14:36:30.656: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 22.194920699s
Jan 31 14:36:32.668: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 24.206421563s
Jan 31 14:36:34.683: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 26.221514571s
Jan 31 14:36:36.690: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Running", Reason="", readiness=true. Elapsed: 28.228807195s
Jan 31 14:36:38.699: INFO: Pod "pod-subpath-test-configmap-c7rj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.237639538s
STEP: Saw pod success
Jan 31 14:36:38.699: INFO: Pod "pod-subpath-test-configmap-c7rj" satisfied condition "success or failure"
Jan 31 14:36:38.704: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-c7rj container test-container-subpath-configmap-c7rj: 
STEP: delete the pod
Jan 31 14:36:38.762: INFO: Waiting for pod pod-subpath-test-configmap-c7rj to disappear
Jan 31 14:36:38.769: INFO: Pod pod-subpath-test-configmap-c7rj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-c7rj
Jan 31 14:36:38.769: INFO: Deleting pod "pod-subpath-test-configmap-c7rj" in namespace "subpath-4763"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:36:38.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4763" for this suite.
Jan 31 14:36:44.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:36:45.005: INFO: namespace subpath-4763 deletion completed in 6.22727184s

• [SLOW TEST:36.694 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
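Editor's note: the Subpath spec above mounts a single ConfigMap key via `subPath`. A minimal sketch of that pattern, with illustrative names and data (not the test's generated values), looks like this:

```yaml
# Hypothetical sketch of a configMap volume consumed via subPath.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config                 # assumed name
data:
  configmap-key: configmap-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                # assumed image
    command: ["cat", "/test-volume/configmap-key"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/configmap-key
      subPath: configmap-key      # mounts one key as a file, not the whole volume
  volumes:
  - name: config
    configMap:
      name: my-config
```

The "atomic writer" aspect being tested is that the kubelet updates such projected files atomically when the ConfigMap changes.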
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:36:45.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 31 14:36:45.071: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:36:45.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6799" for this suite.
Jan 31 14:36:51.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:36:51.343: INFO: namespace kubectl-6799 deletion completed in 6.160858721s

• [SLOW TEST:6.335 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:36:51.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 31 14:37:07.564: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:37:07.649: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:37:09.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:37:09.656: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:37:11.650: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:37:11.662: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:37:13.650: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:37:13.662: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:37:15.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:37:15.658: INFO: Pod pod-with-prestop-http-hook still exists
Jan 31 14:37:17.649: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 31 14:37:17.658: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:37:17.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3427" for this suite.
Jan 31 14:37:39.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:37:39.816: INFO: namespace container-lifecycle-hook-3427 deletion completed in 22.119926725s

• [SLOW TEST:48.472 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
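Editor's note: the lifecycle-hook spec above deletes a pod carrying a `preStop` httpGet hook and then checks that the hook handler received the request. The shape of such a pod is sketched below; the path, port, and host are placeholders, not the test's actual values:

```yaml
# Hypothetical sketch of a preStop httpGet lifecycle hook.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: app
    image: nginx                  # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop # placeholder path
          port: 8080              # placeholder port
          host: 10.32.0.4         # placeholder: the hook-handler pod's IP
```

On deletion, the kubelet fires the httpGet before sending SIGTERM, which is why the log polls for the pod to disappear and then checks the handler.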
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:37:39.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 31 14:37:39.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2306'
Jan 31 14:37:42.521: INFO: stderr: ""
Jan 31 14:37:42.521: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 31 14:37:43.539: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:43.539: INFO: Found 0 / 1
Jan 31 14:37:44.551: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:44.552: INFO: Found 0 / 1
Jan 31 14:37:45.541: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:45.541: INFO: Found 0 / 1
Jan 31 14:37:46.537: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:46.537: INFO: Found 0 / 1
Jan 31 14:37:47.533: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:47.533: INFO: Found 0 / 1
Jan 31 14:37:48.538: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:48.539: INFO: Found 0 / 1
Jan 31 14:37:49.539: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:49.540: INFO: Found 0 / 1
Jan 31 14:37:50.549: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:50.549: INFO: Found 1 / 1
Jan 31 14:37:50.549: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 31 14:37:50.559: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:50.559: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 31 14:37:50.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-dmlvc --namespace=kubectl-2306 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 31 14:37:50.728: INFO: stderr: ""
Jan 31 14:37:50.728: INFO: stdout: "pod/redis-master-dmlvc patched\n"
STEP: checking annotations
Jan 31 14:37:50.733: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:37:50.733: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:37:50.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2306" for this suite.
Jan 31 14:38:12.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:38:12.895: INFO: namespace kubectl-2306 deletion completed in 22.155997808s

• [SLOW TEST:33.079 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
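Editor's note: the strategic-merge patch applied in the spec above is the JSON document below, exactly as passed to `kubectl patch -p` in the log:

```json
{
  "metadata": {
    "annotations": {
      "x": "y"
    }
  }
}
```

A strategic-merge patch of this shape adds (or overwrites) the `x: y` annotation without touching the pod's other metadata.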
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:38:12.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-132/configmap-test-4cc97247-fbdd-4b44-8c57-1d53ec68554d
STEP: Creating a pod to test consume configMaps
Jan 31 14:38:13.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3" in namespace "configmap-132" to be "success or failure"
Jan 31 14:38:13.080: INFO: Pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.820133ms
Jan 31 14:38:15.096: INFO: Pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035494732s
Jan 31 14:38:17.111: INFO: Pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050874746s
Jan 31 14:38:19.117: INFO: Pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056587657s
Jan 31 14:38:21.130: INFO: Pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069505394s
STEP: Saw pod success
Jan 31 14:38:21.130: INFO: Pod "pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3" satisfied condition "success or failure"
Jan 31 14:38:21.135: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3 container env-test: 
STEP: delete the pod
Jan 31 14:38:21.285: INFO: Waiting for pod pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3 to disappear
Jan 31 14:38:21.302: INFO: Pod pod-configmaps-bbd67d61-cfa4-4200-a033-ab05ba0f8bd3 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:38:21.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-132" for this suite.
Jan 31 14:38:27.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:38:27.471: INFO: namespace configmap-132 deletion completed in 6.16256031s

• [SLOW TEST:14.575 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
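Editor's note: the ConfigMap spec above injects ConfigMap data into a container's environment and inspects the resulting env vars. The generated names in the log are random; a sketch with illustrative names would be:

```yaml
# Hypothetical sketch of consuming a ConfigMap via the environment.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-test     # assumed ConfigMap name
          key: data-1              # assumed key
```

The pod runs to completion ("success or failure" in the log) because its only job is to print the environment and exit.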
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:38:27.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 31 14:38:27.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4794'
Jan 31 14:38:28.078: INFO: stderr: ""
Jan 31 14:38:28.079: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 14:38:28.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4794'
Jan 31 14:38:28.240: INFO: stderr: ""
Jan 31 14:38:28.240: INFO: stdout: "update-demo-nautilus-4xj4f update-demo-nautilus-rbdtb "
Jan 31 14:38:28.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xj4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:38:28.449: INFO: stderr: ""
Jan 31 14:38:28.450: INFO: stdout: ""
Jan 31 14:38:28.450: INFO: update-demo-nautilus-4xj4f is created but not running
Jan 31 14:38:33.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4794'
Jan 31 14:38:35.697: INFO: stderr: ""
Jan 31 14:38:35.697: INFO: stdout: "update-demo-nautilus-4xj4f update-demo-nautilus-rbdtb "
Jan 31 14:38:35.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xj4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:38:36.204: INFO: stderr: ""
Jan 31 14:38:36.205: INFO: stdout: ""
Jan 31 14:38:36.205: INFO: update-demo-nautilus-4xj4f is created but not running
Jan 31 14:38:41.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4794'
Jan 31 14:38:41.441: INFO: stderr: ""
Jan 31 14:38:41.442: INFO: stdout: "update-demo-nautilus-4xj4f update-demo-nautilus-rbdtb "
Jan 31 14:38:41.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xj4f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:38:41.562: INFO: stderr: ""
Jan 31 14:38:41.562: INFO: stdout: "true"
Jan 31 14:38:41.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xj4f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:38:41.720: INFO: stderr: ""
Jan 31 14:38:41.720: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 14:38:41.720: INFO: validating pod update-demo-nautilus-4xj4f
Jan 31 14:38:41.732: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 14:38:41.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 14:38:41.733: INFO: update-demo-nautilus-4xj4f is verified up and running
Jan 31 14:38:41.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbdtb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:38:41.862: INFO: stderr: ""
Jan 31 14:38:41.862: INFO: stdout: "true"
Jan 31 14:38:41.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbdtb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:38:42.041: INFO: stderr: ""
Jan 31 14:38:42.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 31 14:38:42.041: INFO: validating pod update-demo-nautilus-rbdtb
Jan 31 14:38:42.049: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 31 14:38:42.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 31 14:38:42.049: INFO: update-demo-nautilus-rbdtb is verified up and running
STEP: rolling-update to new replication controller
Jan 31 14:38:42.052: INFO: scanned /root for discovery docs: 
Jan 31 14:38:42.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4794'
Jan 31 14:39:12.822: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 31 14:39:12.823: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 31 14:39:12.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4794'
Jan 31 14:39:13.028: INFO: stderr: ""
Jan 31 14:39:13.028: INFO: stdout: "update-demo-kitten-9cdgn update-demo-kitten-xvw4r "
Jan 31 14:39:13.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9cdgn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:39:13.232: INFO: stderr: ""
Jan 31 14:39:13.232: INFO: stdout: "true"
Jan 31 14:39:13.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9cdgn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:39:13.394: INFO: stderr: ""
Jan 31 14:39:13.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 14:39:13.395: INFO: validating pod update-demo-kitten-9cdgn
Jan 31 14:39:13.428: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 14:39:13.429: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 31 14:39:13.429: INFO: update-demo-kitten-9cdgn is verified up and running
Jan 31 14:39:13.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xvw4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:39:13.565: INFO: stderr: ""
Jan 31 14:39:13.565: INFO: stdout: "true"
Jan 31 14:39:13.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xvw4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4794'
Jan 31 14:39:13.755: INFO: stderr: ""
Jan 31 14:39:13.755: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 31 14:39:13.755: INFO: validating pod update-demo-kitten-xvw4r
Jan 31 14:39:13.796: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 31 14:39:13.796: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 31 14:39:13.796: INFO: update-demo-kitten-xvw4r is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:39:13.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4794" for this suite.
Jan 31 14:39:41.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:39:41.980: INFO: namespace kubectl-4794 deletion completed in 28.173140665s

• [SLOW TEST:74.509 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
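The Update Demo test above verifies each pod by fetching the pod's served `data.json` and comparing its `image` field against the expected filename (the `got data: {"image": "kitten.jpg"}` / `Unmarshalled json ... expecting kitten.jpg` lines). A minimal Python sketch of that validation step, assuming the raw JSON has already been fetched from the pod (the function name is ours, not the framework's):

```python
import json

def validate_pod_data(raw: str, expected_image: str) -> bool:
    """Mimic the e2e check seen in the log: unmarshal the pod's
    data.json and compare its 'image' field to the expected file."""
    data = json.loads(raw)
    return data.get("image") == expected_image

# After the rolling update, the log shows each kitten pod serving
# {"image": "kitten.jpg"}, which passes this check:
print(validate_pod_data('{"image": "kitten.jpg"}', "kitten.jpg"))  # True
```

A nautilus pod left over from before the rollout would fail the same check, which is how the test detects an incomplete rolling update.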
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:39:41.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:39:42.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975" in namespace "projected-622" to be "success or failure"
Jan 31 14:39:42.101: INFO: Pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975": Phase="Pending", Reason="", readiness=false. Elapsed: 9.71817ms
Jan 31 14:39:44.118: INFO: Pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027561337s
Jan 31 14:39:46.127: INFO: Pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035748313s
Jan 31 14:39:48.138: INFO: Pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047267384s
Jan 31 14:39:50.149: INFO: Pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058609681s
STEP: Saw pod success
Jan 31 14:39:50.150: INFO: Pod "downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975" satisfied condition "success or failure"
Jan 31 14:39:50.154: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975 container client-container: 
STEP: delete the pod
Jan 31 14:39:50.245: INFO: Waiting for pod downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975 to disappear
Jan 31 14:39:50.277: INFO: Pod downwardapi-volume-f0791062-a79b-4fcf-a076-384ef19d3975 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:39:50.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-622" for this suite.
Jan 31 14:39:56.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:39:56.467: INFO: namespace projected-622 deletion completed in 6.161590557s

• [SLOW TEST:14.486 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
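The Projected downwardAPI test above creates a pod whose projected volume exposes the container's CPU limit as a file, then reads that file back from the container logs. A hedged sketch of the kind of manifest involved, expressed as a Python dict; the pod name, image, mount path, and `1250m` limit here are illustrative assumptions, not values taken from this run:

```python
# Sketch of a pod using a projected downwardAPI volume to expose
# the container's own CPU limit as a file (values are assumptions).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            # Print the projected file, then exit (pod reaches Succeeded).
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "1250m"}},
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{"downwardAPI": {"items": [{
                "path": "cpu_limit",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "limits.cpu",
                },
            }]}}]},
        }],
    },
}
```

The test then polls the pod phase (the `Phase="Pending"` ... `Phase="Succeeded"` lines above) and asserts the container's output matches the declared limit.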
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:39:56.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:39:56.637: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 31 14:40:01.650: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 14:40:05.665: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 31 14:40:07.675: INFO: Creating deployment "test-rollover-deployment"
Jan 31 14:40:07.700: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 31 14:40:09.730: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 31 14:40:09.743: INFO: Ensure that both replica sets have 1 created replica
Jan 31 14:40:09.752: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 31 14:40:09.765: INFO: Updating deployment test-rollover-deployment
Jan 31 14:40:09.765: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 31 14:40:11.791: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 31 14:40:11.800: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 31 14:40:11.811: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:11.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:13.825: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:13.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:15.828: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:15.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:17.842: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:17.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078410, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:19.830: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:19.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078418, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:21.828: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:21.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078418, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:23.830: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:23.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078418, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:25.842: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:25.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078418, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:27.857: INFO: all replica sets need to contain the pod-template-hash label
Jan 31 14:40:27.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078418, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078407, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:40:29.853: INFO: 
Jan 31 14:40:29.854: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 31 14:40:29.881: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1710,SelfLink:/apis/apps/v1/namespaces/deployment-1710/deployments/test-rollover-deployment,UID:212564c0-c416-49c8-9add-e6282ca9d100,ResourceVersion:22576127,Generation:2,CreationTimestamp:2020-01-31 14:40:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-31 14:40:07 +0000 UTC 2020-01-31 14:40:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-31 14:40:28 +0000 UTC 2020-01-31 14:40:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 14:40:29.889: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-1710,SelfLink:/apis/apps/v1/namespaces/deployment-1710/replicasets/test-rollover-deployment-854595fc44,UID:b4d9e562-580d-42f6-b0ad-1d3c14c97efe,ResourceVersion:22576118,Generation:2,CreationTimestamp:2020-01-31 14:40:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 212564c0-c416-49c8-9add-e6282ca9d100 0xc0028adcc7 0xc0028adcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 31 14:40:29.889: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 31 14:40:29.890: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1710,SelfLink:/apis/apps/v1/namespaces/deployment-1710/replicasets/test-rollover-controller,UID:857cddbf-2ab4-4c62-a2da-219035d284d8,ResourceVersion:22576126,Generation:2,CreationTimestamp:2020-01-31 14:39:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 212564c0-c416-49c8-9add-e6282ca9d100 0xc0028adbf7 0xc0028adbf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 14:40:29.890: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-1710,SelfLink:/apis/apps/v1/namespaces/deployment-1710/replicasets/test-rollover-deployment-9b8b997cf,UID:4570d447-45a2-48a8-8cf5-0f1b9f883fcd,ResourceVersion:22576085,Generation:2,CreationTimestamp:2020-01-31 14:40:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 212564c0-c416-49c8-9add-e6282ca9d100 0xc0028add90 0xc0028add91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 14:40:29.901: INFO: Pod "test-rollover-deployment-854595fc44-ks928" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-ks928,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-1710,SelfLink:/api/v1/namespaces/deployment-1710/pods/test-rollover-deployment-854595fc44-ks928,UID:259f09b2-9c05-404b-9407-c10919599e17,ResourceVersion:22576102,Generation:0,CreationTimestamp:2020-01-31 14:40:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 b4d9e562-580d-42f6-b0ad-1d3c14c97efe 0xc003053af7 0xc003053af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7dhrd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7dhrd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7dhrd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003053b70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003053b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:40:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:40:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:40:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:40:10 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-31 14:40:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-31 14:40:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a77429a9113378eab8d91aae9b764a617754ded8a24ee98cbd735c061dbb0243}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:40:29.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1710" for this suite.
Jan 31 14:40:36.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:40:36.194: INFO: namespace deployment-1710 deletion completed in 6.286895428s

• [SLOW TEST:39.727 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:40:36.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3090.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3090.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.76.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.76.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.76.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.76.140_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3090.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3090.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3090.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3090.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3090.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 140.76.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.76.140_udp@PTR;check="$$(dig +tcp +noall +answer +search 140.76.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.76.140_tcp@PTR;sleep 1; done

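The probe scripts above derive two DNS names from an IP address: a dashed pod A-record name (`<a>-<b>-<c>-<d>.<namespace>.pod.cluster.local`) and a reversed `in-addr.arpa.` PTR name. Both transforms can be sketched standalone; the IP and namespace below are the ones that appear in this log:

```shell
ip="10.101.76.140"   # the service ClusterIP seen in this run
ns="dns-3090"        # the test namespace

# Pod A record: dots become dashes, suffixed with <ns>.pod.cluster.local
podARec=$(echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}')

# PTR name: octets reversed, suffixed with in-addr.arpa.
ptrName=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')

echo "$podARec"   # 10-101-76-140.dns-3090.pod.cluster.local
echo "$ptrName"   # 140.76.101.10.in-addr.arpa.
```

This is why the result files later in the log are keyed `10.101.76.140_udp@PTR` while the dig query itself asks for `140.76.101.10.in-addr.arpa.`.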
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 14:40:50.622: INFO: Unable to read wheezy_udp@dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.633: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.644: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.648: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.653: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.658: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.662: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.666: INFO: Unable to read 10.101.76.140_udp@PTR from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.670: INFO: Unable to read 10.101.76.140_tcp@PTR from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.674: INFO: Unable to read jessie_udp@dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.678: INFO: Unable to read jessie_tcp@dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.685: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.697: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.700: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.706: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-3090.svc.cluster.local from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.712: INFO: Unable to read jessie_udp@PodARecord from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.746: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.749: INFO: Unable to read 10.101.76.140_udp@PTR from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.753: INFO: Unable to read 10.101.76.140_tcp@PTR from pod dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2: the server could not find the requested resource (get pods dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2)
Jan 31 14:40:50.753: INFO: Lookups using dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2 failed for: [wheezy_udp@dns-test-service.dns-3090.svc.cluster.local wheezy_tcp@dns-test-service.dns-3090.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-3090.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-3090.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.101.76.140_udp@PTR 10.101.76.140_tcp@PTR jessie_udp@dns-test-service.dns-3090.svc.cluster.local jessie_tcp@dns-test-service.dns-3090.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3090.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-3090.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-3090.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.101.76.140_udp@PTR 10.101.76.140_tcp@PTR]

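Note the pattern here: the poll at 14:40:50 fails for every name, yet the run succeeds five seconds later. The probe pods tolerate the propagation delay by looping (the `seq 1 600 ... sleep 1` loops above) until each lookup returns output. A generic helper in the same spirit, with a made-up name for illustration:

```shell
# retry_until_ok: run a command up to N times, one second apart,
# returning success as soon as the command succeeds (hypothetical helper,
# mirroring the retry loops in the probe scripts above).
retry_until_ok() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# e.g. (in-cluster):
#   retry_until_ok 600 sh -c 'test -n "$(dig +short my-svc.my-ns.svc.cluster.local A)"'
```

The test framework applies the same idea on the client side: it re-reads the result files every few seconds until all expected names report OK.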
Jan 31 14:40:55.920: INFO: DNS probes using dns-3090/dns-test-e63b6eba-180a-4792-b085-3425f0a4cfc2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:40:56.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3090" for this suite.
Jan 31 14:41:04.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:41:04.645: INFO: namespace dns-3090 deletion completed in 8.258123364s

• [SLOW TEST:28.451 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:41:04.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 31 14:41:04.781: INFO: Waiting up to 5m0s for pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7" in namespace "emptydir-7406" to be "success or failure"
Jan 31 14:41:04.785: INFO: Pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417143ms
Jan 31 14:41:06.794: INFO: Pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012853132s
Jan 31 14:41:08.799: INFO: Pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018412754s
Jan 31 14:41:10.808: INFO: Pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027382852s
Jan 31 14:41:12.830: INFO: Pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049355935s
STEP: Saw pod success
Jan 31 14:41:12.831: INFO: Pod "pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7" satisfied condition "success or failure"
Jan 31 14:41:12.837: INFO: Trying to get logs from node iruya-node pod pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7 container test-container: 
STEP: delete the pod
Jan 31 14:41:12.982: INFO: Waiting for pod pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7 to disappear
Jan 31 14:41:12.995: INFO: Pod pod-3d8e4bfe-c366-45ca-8fa2-0a9b73eef2f7 no longer exists
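The emptydir test above writes a file with mode 0644 into a tmpfs-backed volume as a non-root user, then reads the mode and contents back. The permission half of that check can be sketched locally without mounting tmpfs; a temp file stands in for the volume:

```shell
# create a file with the mode under test and read the mode back
f=$(mktemp)
chmod 0644 "$f"
mode=$(stat -c '%a' "$f")   # GNU stat; BSD stat would use -f '%Lp'
echo "$mode"
rm -f "$f"
```

With 0644, the owner can read and write while group and others can only read, which is what lets the non-root test container read the file back.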
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:41:12.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7406" for this suite.
Jan 31 14:41:19.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:41:19.158: INFO: namespace emptydir-7406 deletion completed in 6.157070989s

• [SLOW TEST:14.511 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:41:19.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 31 14:41:28.001: INFO: Successfully updated pod "pod-update-67957e56-a97a-4ca5-acaf-1225d6680102"
STEP: verifying the updated pod is in kubernetes
Jan 31 14:41:28.036: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:41:28.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-136" for this suite.
Jan 31 14:41:50.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:41:50.199: INFO: namespace pods-136 deletion completed in 22.159619978s

• [SLOW TEST:31.041 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:41:50.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 31 14:41:50.439: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4676,SelfLink:/api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-resource-version,UID:08242830-f06b-4eaf-a034-f7e3084ee6ba,ResourceVersion:22576388,Generation:0,CreationTimestamp:2020-01-31 14:41:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 31 14:41:50.439: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4676,SelfLink:/api/v1/namespaces/watch-4676/configmaps/e2e-watch-test-resource-version,UID:08242830-f06b-4eaf-a034-f7e3084ee6ba,ResourceVersion:22576389,Generation:0,CreationTimestamp:2020-01-31 14:41:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:41:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4676" for this suite.
Jan 31 14:41:56.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:41:56.625: INFO: namespace watch-4676 deletion completed in 6.177575147s

• [SLOW TEST:6.425 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:41:56.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6565.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6565.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6565.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6565.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

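Unlike the service-record test earlier, the host checks above use `getent hosts` rather than `dig`. `getent` resolves through the libc NSS stack, which consults `/etc/hosts` (per `nsswitch.conf`) before or instead of DNS, so it exercises the kubelet-managed hosts entries that this test is about. A local sketch of the same check, with `localhost` standing in for the in-cluster names:

```shell
# getent hosts resolves via NSS, including /etc/hosts,
# so a non-empty result means the name is locally resolvable
name="localhost"
if test -n "$(getent hosts "$name")"; then
  echo OK
fi
```

A plain `dig` query would bypass `/etc/hosts` entirely, which is why the probe distinguishes `*_hosts@...` results from the `*_udp@`/`*_tcp@` DNS results.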
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 31 14:42:08.893: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad: the server could not find the requested resource (get pods dns-test-47ffc366-96ee-40e1-b957-480582c49dad)
Jan 31 14:42:08.910: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad: the server could not find the requested resource (get pods dns-test-47ffc366-96ee-40e1-b957-480582c49dad)
Jan 31 14:42:08.921: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6565.svc.cluster.local from pod dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad: the server could not find the requested resource (get pods dns-test-47ffc366-96ee-40e1-b957-480582c49dad)
Jan 31 14:42:08.932: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad: the server could not find the requested resource (get pods dns-test-47ffc366-96ee-40e1-b957-480582c49dad)
Jan 31 14:42:08.941: INFO: Unable to read jessie_udp@PodARecord from pod dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad: the server could not find the requested resource (get pods dns-test-47ffc366-96ee-40e1-b957-480582c49dad)
Jan 31 14:42:08.946: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad: the server could not find the requested resource (get pods dns-test-47ffc366-96ee-40e1-b957-480582c49dad)
Jan 31 14:42:08.946: INFO: Lookups using dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6565.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 31 14:42:14.008: INFO: DNS probes using dns-6565/dns-test-47ffc366-96ee-40e1-b957-480582c49dad succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:42:14.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6565" for this suite.
Jan 31 14:42:20.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:42:20.899: INFO: namespace dns-6565 deletion completed in 6.286618326s

• [SLOW TEST:24.273 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:42:20.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 31 14:42:21.095: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 31 14:42:26.104: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:42:27.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6469" for this suite.
Jan 31 14:42:33.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:42:33.369: INFO: namespace replication-controller-6469 deletion completed in 6.213473196s

• [SLOW TEST:12.469 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:42:33.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:42:33.533: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:42:34.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8410" for this suite.
Jan 31 14:42:40.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:42:40.973: INFO: namespace custom-resource-definition-8410 deletion completed in 6.204357279s

• [SLOW TEST:7.604 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:42:40.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:42:41.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6090'
Jan 31 14:42:41.617: INFO: stderr: ""
Jan 31 14:42:41.618: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 31 14:42:41.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6090'
Jan 31 14:42:42.193: INFO: stderr: ""
Jan 31 14:42:42.194: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 31 14:42:43.292: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:43.293: INFO: Found 0 / 1
Jan 31 14:42:44.206: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:44.206: INFO: Found 0 / 1
Jan 31 14:42:45.215: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:45.215: INFO: Found 0 / 1
Jan 31 14:42:46.201: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:46.201: INFO: Found 0 / 1
Jan 31 14:42:47.203: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:47.204: INFO: Found 0 / 1
Jan 31 14:42:48.200: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:48.200: INFO: Found 0 / 1
Jan 31 14:42:49.201: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:49.201: INFO: Found 1 / 1
Jan 31 14:42:49.201: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 31 14:42:49.207: INFO: Selector matched 1 pods for map[app:redis]
Jan 31 14:42:49.207: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 31 14:42:49.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-dmn65 --namespace=kubectl-6090'
Jan 31 14:42:49.486: INFO: stderr: ""
Jan 31 14:42:49.487: INFO: stdout: "Name:           redis-master-dmn65\nNamespace:      kubectl-6090\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Fri, 31 Jan 2020 14:42:41 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://9afb4adbd4237309c1fe887e66eb1775d616e4ede4b916a6668e20575c61be50\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 31 Jan 2020 14:42:48 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8tjdg (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-8tjdg:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-8tjdg\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-6090/redis-master-dmn65 to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Jan 31 14:42:49.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6090'
Jan 31 14:42:49.631: INFO: stderr: ""
Jan 31 14:42:49.631: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-6090\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-dmn65\n"
Jan 31 14:42:49.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6090'
Jan 31 14:42:49.748: INFO: stderr: ""
Jan 31 14:42:49.748: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-6090\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.97.115.237\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 31 14:42:49.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 31 14:42:49.875: INFO: stderr: ""
Jan 31 14:42:49.875: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 31 Jan 2020 14:42:42 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 31 Jan 2020 14:42:42 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 31 Jan 2020 14:42:42 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 31 Jan 2020 14:42:42 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         180d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         111d\n  kubectl-6090               redis-master-dmn65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 31 14:42:49.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6090'
Jan 31 14:42:50.025: INFO: stderr: ""
Jan 31 14:42:50.025: INFO: stdout: "Name:         kubectl-6090\nLabels:       e2e-framework=kubectl\n              e2e-run=f500b07c-89f9-4588-b07d-6f1b18ca7724\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:42:50.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6090" for this suite.
Jan 31 14:43:12.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:43:12.172: INFO: namespace kubectl-6090 deletion completed in 22.135525608s

• [SLOW TEST:31.198 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
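Note: the manifests this test pipes to `kubectl create -f -` are not captured in the log. The following is a plausible reconstruction assembled from the `describe` output above (labels `app=redis`/`role=master`, image `gcr.io/kubernetes-e2e-test-images/redis:1.0`, a named container port matching the service's `TargetPort: redis-server`); it is an illustrative sketch, not the test's actual source.

```yaml
# Reconstruction from the describe output; not the literal test manifest.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server        # matches "TargetPort: redis-server/TCP" above
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server
```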
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:43:12.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:43:12.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5270" for this suite.
Jan 31 14:43:18.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:43:18.517: INFO: namespace services-5270 deletion completed in 6.159486465s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.345 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:43:18.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b565e801-2d43-4f1d-9ffe-ed1c964e384f
STEP: Creating a pod to test consume configMaps
Jan 31 14:43:19.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578" in namespace "configmap-5441" to be "success or failure"
Jan 31 14:43:19.163: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578": Phase="Pending", Reason="", readiness=false. Elapsed: 68.126447ms
Jan 31 14:43:21.173: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078088352s
Jan 31 14:43:23.186: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091196545s
Jan 31 14:43:25.204: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109024767s
Jan 31 14:43:27.229: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134451847s
Jan 31 14:43:29.242: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147224973s
STEP: Saw pod success
Jan 31 14:43:29.242: INFO: Pod "pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578" satisfied condition "success or failure"
Jan 31 14:43:29.246: INFO: Trying to get logs from node iruya-node pod pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578 container configmap-volume-test: <nil>
STEP: delete the pod
Jan 31 14:43:29.309: INFO: Waiting for pod pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578 to disappear
Jan 31 14:43:29.322: INFO: Pod pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:43:29.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5441" for this suite.
Jan 31 14:43:35.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:43:35.525: INFO: namespace configmap-5441 deletion completed in 6.190793915s

• [SLOW TEST:17.006 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
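For reference, a pod consuming a ConfigMap volume as a non-root user looks roughly like the sketch below. Only the ConfigMap, pod, and container names are taken from the log; the image, uid, key/value data, and command are assumptions, not the e2e test's actual definitions.

```yaml
# Illustrative sketch; data, image, uid, and command are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-b565e801-2d43-4f1d-9ffe-ed1c964e384f
data:
  data-1: value-1               # assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-486e2b61-b544-44c1-b4c6-765f53b11578
spec:
  restartPolicy: Never          # the pod runs once and ends in phase Succeeded
  containers:
  - name: configmap-volume-test
    image: busybox              # assumption; the suite uses its own test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    securityContext:
      runAsUser: 1000           # non-root uid, the point of the [LinuxOnly] variant
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-b565e801-2d43-4f1d-9ffe-ed1c964e384f
```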
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:43:35.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7e190f20-96c6-422f-ae89-9bdaf34183c0
STEP: Creating a pod to test consume secrets
Jan 31 14:43:35.638: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6" in namespace "projected-4947" to be "success or failure"
Jan 31 14:43:35.642: INFO: Pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.620804ms
Jan 31 14:43:37.658: INFO: Pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02004819s
Jan 31 14:43:39.669: INFO: Pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03034361s
Jan 31 14:43:41.677: INFO: Pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038489605s
Jan 31 14:43:43.684: INFO: Pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046133001s
STEP: Saw pod success
Jan 31 14:43:43.685: INFO: Pod "pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6" satisfied condition "success or failure"
Jan 31 14:43:43.688: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 31 14:43:43.761: INFO: Waiting for pod pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6 to disappear
Jan 31 14:43:43.770: INFO: Pod pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:43:43.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4947" for this suite.
Jan 31 14:43:49.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:43:50.029: INFO: namespace projected-4947 deletion completed in 6.250862671s

• [SLOW TEST:14.504 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
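A projected secret volume with `defaultMode` set, as exercised above, can be sketched as follows. The Secret, pod, and container names come from the log; the secret payload, image, command, and mode value are assumptions.

```yaml
# Illustrative sketch; payload, image, command, and mode are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-7e190f20-96c6-422f-ae89-9bdaf34183c0
stringData:
  data-1: value-1               # assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-e49397b7-87da-484d-8b78-3f860f32b9c6
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox              # assumption
    command: ["ls", "-l", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400         # assumed value; the test asserts the mode it set
      sources:
      - secret:
          name: projected-secret-test-7e190f20-96c6-422f-ae89-9bdaf34183c0
```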
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:43:50.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:43:50.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa" in namespace "downward-api-2672" to be "success or failure"
Jan 31 14:43:50.179: INFO: Pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa": Phase="Pending", Reason="", readiness=false. Elapsed: 44.381185ms
Jan 31 14:43:52.187: INFO: Pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052129846s
Jan 31 14:43:54.203: INFO: Pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068472319s
Jan 31 14:43:56.252: INFO: Pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117444058s
Jan 31 14:43:58.262: INFO: Pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.127750269s
STEP: Saw pod success
Jan 31 14:43:58.262: INFO: Pod "downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa" satisfied condition "success or failure"
Jan 31 14:43:58.265: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa container client-container: <nil>
STEP: delete the pod
Jan 31 14:43:58.348: INFO: Waiting for pod downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa to disappear
Jan 31 14:43:58.355: INFO: Pod downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:43:58.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2672" for this suite.
Jan 31 14:44:04.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:44:04.593: INFO: namespace downward-api-2672 deletion completed in 6.229831639s

• [SLOW TEST:14.563 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
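Exposing a container's CPU limit through a downward API volume, as this spec does, follows the pattern below. Pod and container names are from the log; the image, command, and resource values are assumptions.

```yaml
# Illustrative sketch; image, command, and resource values are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-c0cade6a-130b-4e9e-8b5f-de412aab71fa
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox              # assumption
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m               # assumed value; a limit must exist for the ref to resolve
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```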
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:44:04.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5759
I0131 14:44:04.778053       9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5759, replica count: 1
I0131 14:44:05.829632       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:06.830398       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:07.830995       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:08.831926       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:09.832712       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:10.833672       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:11.835218       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 14:44:12.836039       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 14:44:13.005: INFO: Created: latency-svc-xms6k
Jan 31 14:44:13.109: INFO: Got endpoints: latency-svc-xms6k [172.679915ms]
Jan 31 14:44:13.162: INFO: Created: latency-svc-px8wh
Jan 31 14:44:13.168: INFO: Got endpoints: latency-svc-px8wh [58.238974ms]
Jan 31 14:44:13.202: INFO: Created: latency-svc-jbzl2
Jan 31 14:44:13.275: INFO: Got endpoints: latency-svc-jbzl2 [165.727881ms]
Jan 31 14:44:13.322: INFO: Created: latency-svc-n7cdq
Jan 31 14:44:13.322: INFO: Got endpoints: latency-svc-n7cdq [211.172609ms]
Jan 31 14:44:13.458: INFO: Created: latency-svc-qxsjx
Jan 31 14:44:13.469: INFO: Got endpoints: latency-svc-qxsjx [358.715599ms]
Jan 31 14:44:13.506: INFO: Created: latency-svc-lv2vx
Jan 31 14:44:13.519: INFO: Got endpoints: latency-svc-lv2vx [407.48191ms]
Jan 31 14:44:13.555: INFO: Created: latency-svc-8tpsb
Jan 31 14:44:13.636: INFO: Got endpoints: latency-svc-8tpsb [524.880022ms]
Jan 31 14:44:13.666: INFO: Created: latency-svc-zsgqj
Jan 31 14:44:13.719: INFO: Created: latency-svc-sc4rj
Jan 31 14:44:13.719: INFO: Got endpoints: latency-svc-zsgqj [607.990446ms]
Jan 31 14:44:13.724: INFO: Got endpoints: latency-svc-sc4rj [614.14738ms]
Jan 31 14:44:13.851: INFO: Created: latency-svc-dqzff
Jan 31 14:44:13.876: INFO: Got endpoints: latency-svc-dqzff [764.672173ms]
Jan 31 14:44:13.977: INFO: Created: latency-svc-kb75z
Jan 31 14:44:13.979: INFO: Got endpoints: latency-svc-kb75z [867.870527ms]
Jan 31 14:44:14.189: INFO: Created: latency-svc-wl7mb
Jan 31 14:44:14.193: INFO: Got endpoints: latency-svc-wl7mb [1.081611315s]
Jan 31 14:44:14.241: INFO: Created: latency-svc-twpnk
Jan 31 14:44:14.247: INFO: Got endpoints: latency-svc-twpnk [1.13555699s]
Jan 31 14:44:14.368: INFO: Created: latency-svc-cg7qv
Jan 31 14:44:14.368: INFO: Got endpoints: latency-svc-cg7qv [1.25671048s]
Jan 31 14:44:14.434: INFO: Created: latency-svc-2jpwc
Jan 31 14:44:14.446: INFO: Got endpoints: latency-svc-2jpwc [1.335043584s]
Jan 31 14:44:14.552: INFO: Created: latency-svc-ftgzz
Jan 31 14:44:14.559: INFO: Got endpoints: latency-svc-ftgzz [1.448116337s]
Jan 31 14:44:14.602: INFO: Created: latency-svc-lnqf5
Jan 31 14:44:14.609: INFO: Got endpoints: latency-svc-lnqf5 [1.441343099s]
Jan 31 14:44:14.700: INFO: Created: latency-svc-6fjhn
Jan 31 14:44:14.706: INFO: Got endpoints: latency-svc-6fjhn [1.430116681s]
Jan 31 14:44:14.764: INFO: Created: latency-svc-ckhnr
Jan 31 14:44:14.772: INFO: Got endpoints: latency-svc-ckhnr [1.450093736s]
Jan 31 14:44:14.892: INFO: Created: latency-svc-jcq6s
Jan 31 14:44:14.893: INFO: Got endpoints: latency-svc-jcq6s [1.424015366s]
Jan 31 14:44:14.966: INFO: Created: latency-svc-5jkqp
Jan 31 14:44:14.978: INFO: Got endpoints: latency-svc-5jkqp [1.459389015s]
Jan 31 14:44:15.095: INFO: Created: latency-svc-s6qkf
Jan 31 14:44:15.102: INFO: Got endpoints: latency-svc-s6qkf [1.465170034s]
Jan 31 14:44:15.144: INFO: Created: latency-svc-w94k8
Jan 31 14:44:15.159: INFO: Got endpoints: latency-svc-w94k8 [1.439522597s]
Jan 31 14:44:15.265: INFO: Created: latency-svc-g72qt
Jan 31 14:44:15.276: INFO: Got endpoints: latency-svc-g72qt [1.551084501s]
Jan 31 14:44:15.328: INFO: Created: latency-svc-mdscn
Jan 31 14:44:15.345: INFO: Got endpoints: latency-svc-mdscn [1.468150755s]
Jan 31 14:44:15.468: INFO: Created: latency-svc-dpbdg
Jan 31 14:44:15.477: INFO: Got endpoints: latency-svc-dpbdg [1.498135512s]
Jan 31 14:44:15.529: INFO: Created: latency-svc-zfbj7
Jan 31 14:44:15.644: INFO: Got endpoints: latency-svc-zfbj7 [1.450843646s]
Jan 31 14:44:15.646: INFO: Created: latency-svc-zlmg9
Jan 31 14:44:15.686: INFO: Got endpoints: latency-svc-zlmg9 [1.439542059s]
Jan 31 14:44:15.692: INFO: Created: latency-svc-mg4fk
Jan 31 14:44:15.695: INFO: Got endpoints: latency-svc-mg4fk [1.32697138s]
Jan 31 14:44:15.747: INFO: Created: latency-svc-4s9mn
Jan 31 14:44:15.825: INFO: Got endpoints: latency-svc-4s9mn [1.378990144s]
Jan 31 14:44:15.837: INFO: Created: latency-svc-4k6c5
Jan 31 14:44:15.855: INFO: Got endpoints: latency-svc-4k6c5 [1.295445822s]
Jan 31 14:44:15.905: INFO: Created: latency-svc-mgfwk
Jan 31 14:44:16.006: INFO: Got endpoints: latency-svc-mgfwk [1.396491503s]
Jan 31 14:44:16.065: INFO: Created: latency-svc-7g9vk
Jan 31 14:44:16.099: INFO: Got endpoints: latency-svc-7g9vk [1.392593899s]
Jan 31 14:44:16.309: INFO: Created: latency-svc-4qm7w
Jan 31 14:44:16.367: INFO: Got endpoints: latency-svc-4qm7w [1.594405471s]
Jan 31 14:44:16.372: INFO: Created: latency-svc-29m4j
Jan 31 14:44:16.381: INFO: Got endpoints: latency-svc-29m4j [281.455243ms]
Jan 31 14:44:16.504: INFO: Created: latency-svc-9bhgq
Jan 31 14:44:16.516: INFO: Got endpoints: latency-svc-9bhgq [1.62335154s]
Jan 31 14:44:16.554: INFO: Created: latency-svc-26dlt
Jan 31 14:44:16.563: INFO: Got endpoints: latency-svc-26dlt [1.584958137s]
Jan 31 14:44:16.664: INFO: Created: latency-svc-dtn2q
Jan 31 14:44:16.672: INFO: Got endpoints: latency-svc-dtn2q [1.568987806s]
Jan 31 14:44:16.714: INFO: Created: latency-svc-2bd4t
Jan 31 14:44:16.719: INFO: Got endpoints: latency-svc-2bd4t [1.559552417s]
Jan 31 14:44:16.761: INFO: Created: latency-svc-4bpnt
Jan 31 14:44:16.833: INFO: Got endpoints: latency-svc-4bpnt [1.557410558s]
Jan 31 14:44:16.846: INFO: Created: latency-svc-ggrwb
Jan 31 14:44:16.857: INFO: Got endpoints: latency-svc-ggrwb [1.510945183s]
Jan 31 14:44:16.927: INFO: Created: latency-svc-l6n2z
Jan 31 14:44:17.009: INFO: Got endpoints: latency-svc-l6n2z [1.53111224s]
Jan 31 14:44:17.022: INFO: Created: latency-svc-dbjvb
Jan 31 14:44:17.027: INFO: Got endpoints: latency-svc-dbjvb [1.382589865s]
Jan 31 14:44:17.081: INFO: Created: latency-svc-f2787
Jan 31 14:44:17.171: INFO: Got endpoints: latency-svc-f2787 [1.48353843s]
Jan 31 14:44:17.199: INFO: Created: latency-svc-nmbxh
Jan 31 14:44:17.205: INFO: Got endpoints: latency-svc-nmbxh [1.509861718s]
Jan 31 14:44:17.301: INFO: Created: latency-svc-gfxnh
Jan 31 14:44:17.310: INFO: Got endpoints: latency-svc-gfxnh [1.484156242s]
Jan 31 14:44:17.351: INFO: Created: latency-svc-8cv6g
Jan 31 14:44:17.364: INFO: Got endpoints: latency-svc-8cv6g [1.508052075s]
Jan 31 14:44:17.477: INFO: Created: latency-svc-xnvld
Jan 31 14:44:17.514: INFO: Got endpoints: latency-svc-xnvld [1.50751372s]
Jan 31 14:44:17.519: INFO: Created: latency-svc-s5n9p
Jan 31 14:44:17.526: INFO: Got endpoints: latency-svc-s5n9p [1.157945102s]
Jan 31 14:44:17.558: INFO: Created: latency-svc-r4md4
Jan 31 14:44:17.572: INFO: Got endpoints: latency-svc-r4md4 [1.19035265s]
Jan 31 14:44:17.640: INFO: Created: latency-svc-455sq
Jan 31 14:44:17.654: INFO: Got endpoints: latency-svc-455sq [1.137449216s]
Jan 31 14:44:17.817: INFO: Created: latency-svc-dktcl
Jan 31 14:44:17.820: INFO: Got endpoints: latency-svc-dktcl [1.256547643s]
Jan 31 14:44:17.869: INFO: Created: latency-svc-4rlh2
Jan 31 14:44:17.874: INFO: Got endpoints: latency-svc-4rlh2 [1.202044238s]
Jan 31 14:44:17.978: INFO: Created: latency-svc-2jjjl
Jan 31 14:44:17.985: INFO: Got endpoints: latency-svc-2jjjl [1.265132138s]
Jan 31 14:44:18.029: INFO: Created: latency-svc-ztzhv
Jan 31 14:44:18.043: INFO: Got endpoints: latency-svc-ztzhv [1.208994538s]
Jan 31 14:44:18.145: INFO: Created: latency-svc-kgzp6
Jan 31 14:44:18.149: INFO: Got endpoints: latency-svc-kgzp6 [1.291901571s]
Jan 31 14:44:18.243: INFO: Created: latency-svc-6grbh
Jan 31 14:44:18.318: INFO: Got endpoints: latency-svc-6grbh [1.308708864s]
Jan 31 14:44:18.346: INFO: Created: latency-svc-wdwxx
Jan 31 14:44:18.364: INFO: Got endpoints: latency-svc-wdwxx [1.336070029s]
Jan 31 14:44:18.494: INFO: Created: latency-svc-6ghlq
Jan 31 14:44:18.502: INFO: Got endpoints: latency-svc-6ghlq [1.330627681s]
Jan 31 14:44:18.582: INFO: Created: latency-svc-dttgw
Jan 31 14:44:18.588: INFO: Got endpoints: latency-svc-dttgw [1.382542582s]
Jan 31 14:44:18.670: INFO: Created: latency-svc-fwv89
Jan 31 14:44:18.677: INFO: Got endpoints: latency-svc-fwv89 [1.366650954s]
Jan 31 14:44:18.717: INFO: Created: latency-svc-6zzfb
Jan 31 14:44:18.734: INFO: Got endpoints: latency-svc-6zzfb [1.369342542s]
Jan 31 14:44:18.829: INFO: Created: latency-svc-jgskw
Jan 31 14:44:18.831: INFO: Got endpoints: latency-svc-jgskw [1.316882882s]
Jan 31 14:44:18.877: INFO: Created: latency-svc-7vq2k
Jan 31 14:44:18.886: INFO: Got endpoints: latency-svc-7vq2k [1.359977381s]
Jan 31 14:44:18.924: INFO: Created: latency-svc-kfgtt
Jan 31 14:44:19.006: INFO: Got endpoints: latency-svc-kfgtt [1.434103806s]
Jan 31 14:44:19.040: INFO: Created: latency-svc-wlmb8
Jan 31 14:44:19.047: INFO: Got endpoints: latency-svc-wlmb8 [1.392034348s]
Jan 31 14:44:19.106: INFO: Created: latency-svc-9pns9
Jan 31 14:44:19.106: INFO: Got endpoints: latency-svc-9pns9 [1.285474857s]
Jan 31 14:44:19.202: INFO: Created: latency-svc-wgh4p
Jan 31 14:44:19.210: INFO: Got endpoints: latency-svc-wgh4p [1.335471453s]
Jan 31 14:44:19.266: INFO: Created: latency-svc-q84b5
Jan 31 14:44:19.283: INFO: Got endpoints: latency-svc-q84b5 [1.298636898s]
Jan 31 14:44:19.421: INFO: Created: latency-svc-jz4h2
Jan 31 14:44:19.431: INFO: Got endpoints: latency-svc-jz4h2 [1.38814819s]
Jan 31 14:44:19.467: INFO: Created: latency-svc-6mhrs
Jan 31 14:44:19.473: INFO: Got endpoints: latency-svc-6mhrs [1.324314595s]
Jan 31 14:44:19.506: INFO: Created: latency-svc-jq9zg
Jan 31 14:44:19.585: INFO: Got endpoints: latency-svc-jq9zg [1.266243915s]
Jan 31 14:44:19.594: INFO: Created: latency-svc-pz6c6
Jan 31 14:44:19.599: INFO: Got endpoints: latency-svc-pz6c6 [1.235293815s]
Jan 31 14:44:19.639: INFO: Created: latency-svc-7sngl
Jan 31 14:44:19.646: INFO: Got endpoints: latency-svc-7sngl [1.143792552s]
Jan 31 14:44:19.754: INFO: Created: latency-svc-w5jbk
Jan 31 14:44:19.760: INFO: Got endpoints: latency-svc-w5jbk [1.171662015s]
Jan 31 14:44:19.796: INFO: Created: latency-svc-fhtmt
Jan 31 14:44:19.812: INFO: Got endpoints: latency-svc-fhtmt [1.134384823s]
Jan 31 14:44:19.916: INFO: Created: latency-svc-wclhh
Jan 31 14:44:19.919: INFO: Got endpoints: latency-svc-wclhh [1.184792587s]
Jan 31 14:44:19.971: INFO: Created: latency-svc-g62wn
Jan 31 14:44:19.987: INFO: Got endpoints: latency-svc-g62wn [1.155193229s]
Jan 31 14:44:20.108: INFO: Created: latency-svc-g9hhv
Jan 31 14:44:20.121: INFO: Got endpoints: latency-svc-g9hhv [1.234429574s]
Jan 31 14:44:20.157: INFO: Created: latency-svc-tzp9v
Jan 31 14:44:20.168: INFO: Got endpoints: latency-svc-tzp9v [1.161621377s]
Jan 31 14:44:20.268: INFO: Created: latency-svc-cvcbk
Jan 31 14:44:20.281: INFO: Got endpoints: latency-svc-cvcbk [1.233995374s]
Jan 31 14:44:20.354: INFO: Created: latency-svc-hr996
Jan 31 14:44:20.495: INFO: Created: latency-svc-kq77f
Jan 31 14:44:20.495: INFO: Got endpoints: latency-svc-hr996 [1.388633136s]
Jan 31 14:44:20.585: INFO: Created: latency-svc-wplnn
Jan 31 14:44:20.585: INFO: Got endpoints: latency-svc-kq77f [1.375518902s]
Jan 31 14:44:20.712: INFO: Got endpoints: latency-svc-wplnn [1.428633741s]
Jan 31 14:44:20.762: INFO: Created: latency-svc-gf4pk
Jan 31 14:44:20.935: INFO: Got endpoints: latency-svc-gf4pk [1.503832959s]
Jan 31 14:44:20.938: INFO: Created: latency-svc-zj82d
Jan 31 14:44:20.946: INFO: Got endpoints: latency-svc-zj82d [1.472624189s]
Jan 31 14:44:21.007: INFO: Created: latency-svc-6bfmc
Jan 31 14:44:21.015: INFO: Got endpoints: latency-svc-6bfmc [1.429641447s]
Jan 31 14:44:21.215: INFO: Created: latency-svc-5kfgn
Jan 31 14:44:21.224: INFO: Got endpoints: latency-svc-5kfgn [1.624943349s]
Jan 31 14:44:21.280: INFO: Created: latency-svc-89k94
Jan 31 14:44:21.281: INFO: Got endpoints: latency-svc-89k94 [1.635010718s]
Jan 31 14:44:21.423: INFO: Created: latency-svc-hczvb
Jan 31 14:44:21.430: INFO: Got endpoints: latency-svc-hczvb [1.669444663s]
Jan 31 14:44:21.472: INFO: Created: latency-svc-tshwn
Jan 31 14:44:21.488: INFO: Got endpoints: latency-svc-tshwn [1.676690596s]
Jan 31 14:44:21.610: INFO: Created: latency-svc-s5gr4
Jan 31 14:44:21.662: INFO: Created: latency-svc-6x2g8
Jan 31 14:44:21.663: INFO: Got endpoints: latency-svc-s5gr4 [1.74396462s]
Jan 31 14:44:21.697: INFO: Got endpoints: latency-svc-6x2g8 [1.71018146s]
Jan 31 14:44:21.806: INFO: Created: latency-svc-kr4lc
Jan 31 14:44:21.823: INFO: Got endpoints: latency-svc-kr4lc [1.702331723s]
Jan 31 14:44:21.869: INFO: Created: latency-svc-6xct2
Jan 31 14:44:21.875: INFO: Got endpoints: latency-svc-6xct2 [1.706440817s]
Jan 31 14:44:22.037: INFO: Created: latency-svc-cr5d4
Jan 31 14:44:22.040: INFO: Got endpoints: latency-svc-cr5d4 [1.759532506s]
Jan 31 14:44:22.125: INFO: Created: latency-svc-dqsnq
Jan 31 14:44:22.304: INFO: Got endpoints: latency-svc-dqsnq [1.808596337s]
Jan 31 14:44:22.366: INFO: Created: latency-svc-dc28l
Jan 31 14:44:22.381: INFO: Got endpoints: latency-svc-dc28l [1.795634418s]
Jan 31 14:44:22.576: INFO: Created: latency-svc-gk9hz
Jan 31 14:44:22.643: INFO: Got endpoints: latency-svc-gk9hz [1.929963626s]
Jan 31 14:44:22.650: INFO: Created: latency-svc-wxbbv
Jan 31 14:44:22.653: INFO: Got endpoints: latency-svc-wxbbv [1.716843415s]
Jan 31 14:44:22.793: INFO: Created: latency-svc-cx4hc
Jan 31 14:44:22.807: INFO: Got endpoints: latency-svc-cx4hc [1.860120267s]
Jan 31 14:44:22.864: INFO: Created: latency-svc-8wn67
Jan 31 14:44:22.876: INFO: Got endpoints: latency-svc-8wn67 [1.860860674s]
Jan 31 14:44:22.977: INFO: Created: latency-svc-hdkb9
Jan 31 14:44:22.994: INFO: Got endpoints: latency-svc-hdkb9 [1.769511288s]
Jan 31 14:44:23.035: INFO: Created: latency-svc-9xkhg
Jan 31 14:44:23.054: INFO: Got endpoints: latency-svc-9xkhg [1.772241196s]
Jan 31 14:44:23.271: INFO: Created: latency-svc-p9fp2
Jan 31 14:44:23.284: INFO: Got endpoints: latency-svc-p9fp2 [1.853881893s]
Jan 31 14:44:23.323: INFO: Created: latency-svc-hsbr7
Jan 31 14:44:23.339: INFO: Got endpoints: latency-svc-hsbr7 [1.850278288s]
Jan 31 14:44:23.924: INFO: Created: latency-svc-2qngt
Jan 31 14:44:23.964: INFO: Got endpoints: latency-svc-2qngt [2.300741006s]
Jan 31 14:44:23.970: INFO: Created: latency-svc-hfxcx
Jan 31 14:44:24.092: INFO: Got endpoints: latency-svc-hfxcx [2.394675192s]
Jan 31 14:44:24.109: INFO: Created: latency-svc-hkbj4
Jan 31 14:44:24.138: INFO: Got endpoints: latency-svc-hkbj4 [2.314311021s]
Jan 31 14:44:24.172: INFO: Created: latency-svc-jxl8f
Jan 31 14:44:24.254: INFO: Created: latency-svc-8pnjr
Jan 31 14:44:24.259: INFO: Got endpoints: latency-svc-jxl8f [2.384317371s]
Jan 31 14:44:24.310: INFO: Got endpoints: latency-svc-8pnjr [2.268762518s]
Jan 31 14:44:24.329: INFO: Created: latency-svc-bqwbn
Jan 31 14:44:24.329: INFO: Got endpoints: latency-svc-bqwbn [2.024522109s]
Jan 31 14:44:24.510: INFO: Created: latency-svc-8bm6l
Jan 31 14:44:24.515: INFO: Got endpoints: latency-svc-8bm6l [2.132222368s]
Jan 31 14:44:24.731: INFO: Created: latency-svc-b7vwk
Jan 31 14:44:24.811: INFO: Got endpoints: latency-svc-b7vwk [2.167591629s]
Jan 31 14:44:24.821: INFO: Created: latency-svc-j4px5
Jan 31 14:44:24.977: INFO: Got endpoints: latency-svc-j4px5 [2.323680449s]
Jan 31 14:44:24.994: INFO: Created: latency-svc-98n87
Jan 31 14:44:25.012: INFO: Got endpoints: latency-svc-98n87 [2.204474382s]
Jan 31 14:44:25.172: INFO: Created: latency-svc-qdr8d
Jan 31 14:44:25.180: INFO: Got endpoints: latency-svc-qdr8d [2.303576063s]
Jan 31 14:44:25.260: INFO: Created: latency-svc-7ngcv
Jan 31 14:44:25.395: INFO: Got endpoints: latency-svc-7ngcv [2.400689824s]
Jan 31 14:44:25.432: INFO: Created: latency-svc-4fpk6
Jan 31 14:44:25.444: INFO: Got endpoints: latency-svc-4fpk6 [2.389147926s]
Jan 31 14:44:25.617: INFO: Created: latency-svc-l95ht
Jan 31 14:44:25.633: INFO: Got endpoints: latency-svc-l95ht [2.349025695s]
Jan 31 14:44:25.689: INFO: Created: latency-svc-6s5rt
Jan 31 14:44:25.707: INFO: Got endpoints: latency-svc-6s5rt [2.367967535s]
Jan 31 14:44:25.847: INFO: Created: latency-svc-5xxcp
Jan 31 14:44:25.866: INFO: Got endpoints: latency-svc-5xxcp [1.901558665s]
Jan 31 14:44:25.924: INFO: Created: latency-svc-mdf9m
Jan 31 14:44:26.035: INFO: Got endpoints: latency-svc-mdf9m [1.942467405s]
Jan 31 14:44:26.063: INFO: Created: latency-svc-vmgdx
Jan 31 14:44:26.075: INFO: Got endpoints: latency-svc-vmgdx [1.936809946s]
Jan 31 14:44:26.111: INFO: Created: latency-svc-nc9f4
Jan 31 14:44:26.236: INFO: Got endpoints: latency-svc-nc9f4 [1.975674176s]
Jan 31 14:44:26.262: INFO: Created: latency-svc-rzjpx
Jan 31 14:44:26.271: INFO: Got endpoints: latency-svc-rzjpx [1.960446819s]
Jan 31 14:44:26.486: INFO: Created: latency-svc-t2c8r
Jan 31 14:44:26.499: INFO: Got endpoints: latency-svc-t2c8r [2.170048562s]
Jan 31 14:44:26.551: INFO: Created: latency-svc-vmlrn
Jan 31 14:44:26.567: INFO: Got endpoints: latency-svc-vmlrn [2.051662185s]
Jan 31 14:44:26.724: INFO: Created: latency-svc-zl7dk
Jan 31 14:44:26.774: INFO: Created: latency-svc-wz66h
Jan 31 14:44:26.786: INFO: Got endpoints: latency-svc-zl7dk [1.97429634s]
Jan 31 14:44:26.819: INFO: Got endpoints: latency-svc-wz66h [1.841877859s]
Jan 31 14:44:26.825: INFO: Created: latency-svc-zkg7r
Jan 31 14:44:26.956: INFO: Got endpoints: latency-svc-zkg7r [1.944103033s]
Jan 31 14:44:26.995: INFO: Created: latency-svc-854td
Jan 31 14:44:27.007: INFO: Got endpoints: latency-svc-854td [1.826616561s]
Jan 31 14:44:27.132: INFO: Created: latency-svc-pdx6b
Jan 31 14:44:27.137: INFO: Got endpoints: latency-svc-pdx6b [1.741110632s]
Jan 31 14:44:27.215: INFO: Created: latency-svc-vcfs5
Jan 31 14:44:27.223: INFO: Got endpoints: latency-svc-vcfs5 [1.778609321s]
Jan 31 14:44:27.426: INFO: Created: latency-svc-m5xq8
Jan 31 14:44:27.431: INFO: Got endpoints: latency-svc-m5xq8 [1.797995669s]
Jan 31 14:44:27.481: INFO: Created: latency-svc-mpq8n
Jan 31 14:44:27.484: INFO: Got endpoints: latency-svc-mpq8n [1.775941739s]
Jan 31 14:44:27.521: INFO: Created: latency-svc-b4gx8
Jan 31 14:44:27.682: INFO: Got endpoints: latency-svc-b4gx8 [1.814331591s]
Jan 31 14:44:27.703: INFO: Created: latency-svc-4stgj
Jan 31 14:44:27.709: INFO: Got endpoints: latency-svc-4stgj [1.673830281s]
Jan 31 14:44:27.769: INFO: Created: latency-svc-x6r24
Jan 31 14:44:27.877: INFO: Got endpoints: latency-svc-x6r24 [1.802074934s]
Jan 31 14:44:27.888: INFO: Created: latency-svc-9krrv
Jan 31 14:44:27.899: INFO: Got endpoints: latency-svc-9krrv [1.661785331s]
Jan 31 14:44:27.948: INFO: Created: latency-svc-42lh4
Jan 31 14:44:27.955: INFO: Got endpoints: latency-svc-42lh4 [1.684155287s]
Jan 31 14:44:28.059: INFO: Created: latency-svc-k2r8f
Jan 31 14:44:28.065: INFO: Got endpoints: latency-svc-k2r8f [1.565441846s]
Jan 31 14:44:28.108: INFO: Created: latency-svc-n58c5
Jan 31 14:44:28.120: INFO: Got endpoints: latency-svc-n58c5 [1.552524487s]
Jan 31 14:44:28.163: INFO: Created: latency-svc-4t477
Jan 31 14:44:28.234: INFO: Got endpoints: latency-svc-4t477 [1.447155249s]
Jan 31 14:44:28.268: INFO: Created: latency-svc-85k28
Jan 31 14:44:28.276: INFO: Got endpoints: latency-svc-85k28 [1.455564617s]
Jan 31 14:44:28.318: INFO: Created: latency-svc-44p6j
Jan 31 14:44:28.325: INFO: Got endpoints: latency-svc-44p6j [1.368392757s]
Jan 31 14:44:28.456: INFO: Created: latency-svc-cn2lt
Jan 31 14:44:28.474: INFO: Got endpoints: latency-svc-cn2lt [1.466682256s]
Jan 31 14:44:28.531: INFO: Created: latency-svc-dctg6
Jan 31 14:44:28.538: INFO: Got endpoints: latency-svc-dctg6 [1.40117254s]
Jan 31 14:44:28.637: INFO: Created: latency-svc-jjkw5
Jan 31 14:44:28.647: INFO: Got endpoints: latency-svc-jjkw5 [1.424132974s]
Jan 31 14:44:28.737: INFO: Created: latency-svc-9c7sw
Jan 31 14:44:28.903: INFO: Got endpoints: latency-svc-9c7sw [1.471791626s]
Jan 31 14:44:28.911: INFO: Created: latency-svc-zm665
Jan 31 14:44:28.918: INFO: Got endpoints: latency-svc-zm665 [1.433997259s]
Jan 31 14:44:28.988: INFO: Created: latency-svc-4zjb7
Jan 31 14:44:28.998: INFO: Got endpoints: latency-svc-4zjb7 [1.31566104s]
Jan 31 14:44:29.075: INFO: Created: latency-svc-524jw
Jan 31 14:44:29.094: INFO: Got endpoints: latency-svc-524jw [1.384447132s]
Jan 31 14:44:29.132: INFO: Created: latency-svc-w26pv
Jan 31 14:44:29.137: INFO: Got endpoints: latency-svc-w26pv [1.259069249s]
Jan 31 14:44:29.247: INFO: Created: latency-svc-kvkcq
Jan 31 14:44:29.260: INFO: Got endpoints: latency-svc-kvkcq [1.360847825s]
Jan 31 14:44:29.318: INFO: Created: latency-svc-svt7t
Jan 31 14:44:29.467: INFO: Got endpoints: latency-svc-svt7t [1.511503855s]
Jan 31 14:44:29.479: INFO: Created: latency-svc-7z2vz
Jan 31 14:44:29.483: INFO: Got endpoints: latency-svc-7z2vz [1.417370835s]
Jan 31 14:44:29.527: INFO: Created: latency-svc-7jb6d
Jan 31 14:44:29.537: INFO: Got endpoints: latency-svc-7jb6d [1.416943699s]
Jan 31 14:44:29.629: INFO: Created: latency-svc-t9xgt
Jan 31 14:44:29.636: INFO: Got endpoints: latency-svc-t9xgt [1.401757743s]
Jan 31 14:44:29.677: INFO: Created: latency-svc-6xb4v
Jan 31 14:44:29.684: INFO: Got endpoints: latency-svc-6xb4v [1.408415284s]
Jan 31 14:44:29.771: INFO: Created: latency-svc-qqn88
Jan 31 14:44:29.789: INFO: Got endpoints: latency-svc-qqn88 [1.463466087s]
Jan 31 14:44:29.828: INFO: Created: latency-svc-5pklv
Jan 31 14:44:29.831: INFO: Got endpoints: latency-svc-5pklv [1.357194613s]
Jan 31 14:44:29.865: INFO: Created: latency-svc-xwpqb
Jan 31 14:44:29.942: INFO: Got endpoints: latency-svc-xwpqb [1.403869455s]
Jan 31 14:44:29.969: INFO: Created: latency-svc-b4xpt
Jan 31 14:44:29.984: INFO: Got endpoints: latency-svc-b4xpt [1.336754197s]
Jan 31 14:44:30.029: INFO: Created: latency-svc-vqxnx
Jan 31 14:44:30.039: INFO: Got endpoints: latency-svc-vqxnx [1.134915288s]
Jan 31 14:44:30.128: INFO: Created: latency-svc-z9zmd
Jan 31 14:44:30.136: INFO: Got endpoints: latency-svc-z9zmd [1.217795441s]
Jan 31 14:44:30.187: INFO: Created: latency-svc-vfhph
Jan 31 14:44:30.192: INFO: Got endpoints: latency-svc-vfhph [1.193571524s]
Jan 31 14:44:30.280: INFO: Created: latency-svc-cfdzm
Jan 31 14:44:30.292: INFO: Got endpoints: latency-svc-cfdzm [1.197917473s]
Jan 31 14:44:30.333: INFO: Created: latency-svc-r9drw
Jan 31 14:44:30.343: INFO: Got endpoints: latency-svc-r9drw [1.20552832s]
Jan 31 14:44:30.521: INFO: Created: latency-svc-dvrgh
Jan 31 14:44:30.534: INFO: Got endpoints: latency-svc-dvrgh [1.273815618s]
Jan 31 14:44:30.584: INFO: Created: latency-svc-lwcw4
Jan 31 14:44:30.703: INFO: Got endpoints: latency-svc-lwcw4 [1.23614276s]
Jan 31 14:44:30.755: INFO: Created: latency-svc-thphj
Jan 31 14:44:30.769: INFO: Got endpoints: latency-svc-thphj [1.285985893s]
Jan 31 14:44:30.961: INFO: Created: latency-svc-npccw
Jan 31 14:44:30.970: INFO: Got endpoints: latency-svc-npccw [1.432722946s]
Jan 31 14:44:31.136: INFO: Created: latency-svc-7qb4b
Jan 31 14:44:31.149: INFO: Got endpoints: latency-svc-7qb4b [1.512248439s]
Jan 31 14:44:31.331: INFO: Created: latency-svc-42nwt
Jan 31 14:44:31.343: INFO: Got endpoints: latency-svc-42nwt [1.658446023s]
Jan 31 14:44:31.514: INFO: Created: latency-svc-2pnp6
Jan 31 14:44:31.580: INFO: Got endpoints: latency-svc-2pnp6 [1.791226544s]
Jan 31 14:44:31.587: INFO: Created: latency-svc-2chr5
Jan 31 14:44:31.592: INFO: Got endpoints: latency-svc-2chr5 [1.760799544s]
Jan 31 14:44:31.724: INFO: Created: latency-svc-z2477
Jan 31 14:44:31.732: INFO: Got endpoints: latency-svc-z2477 [1.789913263s]
Jan 31 14:44:31.787: INFO: Created: latency-svc-9dwnt
Jan 31 14:44:31.913: INFO: Got endpoints: latency-svc-9dwnt [1.928376068s]
Jan 31 14:44:31.926: INFO: Created: latency-svc-z29sn
Jan 31 14:44:31.927: INFO: Got endpoints: latency-svc-z29sn [1.888344315s]
Jan 31 14:44:31.966: INFO: Created: latency-svc-prt9x
Jan 31 14:44:31.979: INFO: Got endpoints: latency-svc-prt9x [1.842409221s]
Jan 31 14:44:32.103: INFO: Created: latency-svc-rrpxm
Jan 31 14:44:32.116: INFO: Got endpoints: latency-svc-rrpxm [1.923822772s]
Jan 31 14:44:32.161: INFO: Created: latency-svc-9fbnd
Jan 31 14:44:32.235: INFO: Got endpoints: latency-svc-9fbnd [1.9423937s]
Jan 31 14:44:32.247: INFO: Created: latency-svc-5mkfs
Jan 31 14:44:32.252: INFO: Got endpoints: latency-svc-5mkfs [1.909024805s]
Jan 31 14:44:32.322: INFO: Created: latency-svc-nxjj7
Jan 31 14:44:32.326: INFO: Got endpoints: latency-svc-nxjj7 [1.791324828s]
Jan 31 14:44:32.449: INFO: Created: latency-svc-tz2zb
Jan 31 14:44:32.514: INFO: Got endpoints: latency-svc-tz2zb [1.809970619s]
Jan 31 14:44:32.616: INFO: Created: latency-svc-7n5ll
Jan 31 14:44:32.621: INFO: Got endpoints: latency-svc-7n5ll [1.851155092s]
Jan 31 14:44:32.676: INFO: Created: latency-svc-9sszn
Jan 31 14:44:32.689: INFO: Got endpoints: latency-svc-9sszn [1.718794444s]
Jan 31 14:44:32.918: INFO: Created: latency-svc-njg4z
Jan 31 14:44:32.968: INFO: Got endpoints: latency-svc-njg4z [1.819648183s]
Jan 31 14:44:32.970: INFO: Created: latency-svc-7sf4z
Jan 31 14:44:32.983: INFO: Got endpoints: latency-svc-7sf4z [1.639506531s]
Jan 31 14:44:33.092: INFO: Created: latency-svc-nlt2f
Jan 31 14:44:33.116: INFO: Got endpoints: latency-svc-nlt2f [1.535759726s]
Jan 31 14:44:33.148: INFO: Created: latency-svc-rz5gm
Jan 31 14:44:33.177: INFO: Got endpoints: latency-svc-rz5gm [1.584112722s]
Jan 31 14:44:33.261: INFO: Created: latency-svc-jh4nq
Jan 31 14:44:33.261: INFO: Got endpoints: latency-svc-jh4nq [1.528595799s]
Jan 31 14:44:33.302: INFO: Created: latency-svc-7vnff
Jan 31 14:44:33.450: INFO: Created: latency-svc-kprkn
Jan 31 14:44:33.451: INFO: Got endpoints: latency-svc-7vnff [1.536637351s]
Jan 31 14:44:33.496: INFO: Got endpoints: latency-svc-kprkn [1.568719939s]
Jan 31 14:44:33.554: INFO: Created: latency-svc-4qsgp
Jan 31 14:44:33.613: INFO: Got endpoints: latency-svc-4qsgp [1.633852122s]
Jan 31 14:44:33.633: INFO: Created: latency-svc-zczks
Jan 31 14:44:33.639: INFO: Got endpoints: latency-svc-zczks [1.521701425s]
Jan 31 14:44:33.697: INFO: Created: latency-svc-vx2fw
Jan 31 14:44:33.812: INFO: Got endpoints: latency-svc-vx2fw [1.576338428s]
Jan 31 14:44:33.813: INFO: Created: latency-svc-fgt58
Jan 31 14:44:33.826: INFO: Got endpoints: latency-svc-fgt58 [1.573801895s]
Jan 31 14:44:33.897: INFO: Created: latency-svc-5pcbp
Jan 31 14:44:33.961: INFO: Got endpoints: latency-svc-5pcbp [1.634996542s]
Jan 31 14:44:33.969: INFO: Created: latency-svc-92x6j
Jan 31 14:44:33.983: INFO: Got endpoints: latency-svc-92x6j [1.468680985s]
Jan 31 14:44:33.983: INFO: Latencies: [58.238974ms 165.727881ms 211.172609ms 281.455243ms 358.715599ms 407.48191ms 524.880022ms 607.990446ms 614.14738ms 764.672173ms 867.870527ms 1.081611315s 1.134384823s 1.134915288s 1.13555699s 1.137449216s 1.143792552s 1.155193229s 1.157945102s 1.161621377s 1.171662015s 1.184792587s 1.19035265s 1.193571524s 1.197917473s 1.202044238s 1.20552832s 1.208994538s 1.217795441s 1.233995374s 1.234429574s 1.235293815s 1.23614276s 1.256547643s 1.25671048s 1.259069249s 1.265132138s 1.266243915s 1.273815618s 1.285474857s 1.285985893s 1.291901571s 1.295445822s 1.298636898s 1.308708864s 1.31566104s 1.316882882s 1.324314595s 1.32697138s 1.330627681s 1.335043584s 1.335471453s 1.336070029s 1.336754197s 1.357194613s 1.359977381s 1.360847825s 1.366650954s 1.368392757s 1.369342542s 1.375518902s 1.378990144s 1.382542582s 1.382589865s 1.384447132s 1.38814819s 1.388633136s 1.392034348s 1.392593899s 1.396491503s 1.40117254s 1.401757743s 1.403869455s 1.408415284s 1.416943699s 1.417370835s 1.424015366s 1.424132974s 1.428633741s 1.429641447s 1.430116681s 1.432722946s 1.433997259s 1.434103806s 1.439522597s 1.439542059s 1.441343099s 1.447155249s 1.448116337s 1.450093736s 1.450843646s 1.455564617s 1.459389015s 1.463466087s 1.465170034s 1.466682256s 1.468150755s 1.468680985s 1.471791626s 1.472624189s 1.48353843s 1.484156242s 1.498135512s 1.503832959s 1.50751372s 1.508052075s 1.509861718s 1.510945183s 1.511503855s 1.512248439s 1.521701425s 1.528595799s 1.53111224s 1.535759726s 1.536637351s 1.551084501s 1.552524487s 1.557410558s 1.559552417s 1.565441846s 1.568719939s 1.568987806s 1.573801895s 1.576338428s 1.584112722s 1.584958137s 1.594405471s 1.62335154s 1.624943349s 1.633852122s 1.634996542s 1.635010718s 1.639506531s 1.658446023s 1.661785331s 1.669444663s 1.673830281s 1.676690596s 1.684155287s 1.702331723s 1.706440817s 1.71018146s 1.716843415s 1.718794444s 1.741110632s 1.74396462s 1.759532506s 1.760799544s 1.769511288s 1.772241196s 1.775941739s 1.778609321s 
1.789913263s 1.791226544s 1.791324828s 1.795634418s 1.797995669s 1.802074934s 1.808596337s 1.809970619s 1.814331591s 1.819648183s 1.826616561s 1.841877859s 1.842409221s 1.850278288s 1.851155092s 1.853881893s 1.860120267s 1.860860674s 1.888344315s 1.901558665s 1.909024805s 1.923822772s 1.928376068s 1.929963626s 1.936809946s 1.9423937s 1.942467405s 1.944103033s 1.960446819s 1.97429634s 1.975674176s 2.024522109s 2.051662185s 2.132222368s 2.167591629s 2.170048562s 2.204474382s 2.268762518s 2.300741006s 2.303576063s 2.314311021s 2.323680449s 2.349025695s 2.367967535s 2.384317371s 2.389147926s 2.394675192s 2.400689824s]
Jan 31 14:44:33.984: INFO: 50 %ile: 1.48353843s
Jan 31 14:44:33.984: INFO: 90 %ile: 1.960446819s
Jan 31 14:44:33.984: INFO: 99 %ile: 2.394675192s
Jan 31 14:44:33.984: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:44:33.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5759" for this suite.
Jan 31 14:45:10.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:45:10.130: INFO: namespace svc-latency-5759 deletion completed in 36.134419112s

• [SLOW TEST:65.537 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
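The 50/90/99 %ile lines above summarize the 200 sorted endpoint-latency samples. A minimal nearest-rank-style sketch of that kind of percentile calculation (an approximation; the exact index formula the e2e framework uses is not shown in this log):

```python
def percentile(samples, p):
    """Nearest-rank style percentile over a list of latency samples.

    Illustrative approximation; the Kubernetes e2e framework's exact
    indexing may differ slightly at the boundaries.
    """
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * p / 100))
    return ordered[idx]

# Toy sample set (not the real 200 measurements from the run above).
latencies = [0.058, 1.134, 1.483, 1.960, 2.394]
print(percentile(latencies, 50))  # → 1.483
```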
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:45:10.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:45:10.297: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.01866ms)
Jan 31 14:45:10.305: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.604274ms)
Jan 31 14:45:10.316: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.21189ms)
Jan 31 14:45:10.325: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.361106ms)
Jan 31 14:45:10.335: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.323297ms)
Jan 31 14:45:10.341: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.637266ms)
Jan 31 14:45:10.352: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.302564ms)
Jan 31 14:45:10.361: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.296269ms)
Jan 31 14:45:10.433: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 71.801143ms)
Jan 31 14:45:10.443: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.570196ms)
Jan 31 14:45:10.451: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.117949ms)
Jan 31 14:45:10.458: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.371693ms)
Jan 31 14:45:10.464: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.374364ms)
Jan 31 14:45:10.473: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.309686ms)
Jan 31 14:45:10.478: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.263077ms)
Jan 31 14:45:10.481: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.907369ms)
Jan 31 14:45:10.487: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.530185ms)
Jan 31 14:45:10.491: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.392842ms)
Jan 31 14:45:10.497: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.179924ms)
Jan 31 14:45:10.505: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.673116ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:45:10.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1036" for this suite.
Jan 31 14:45:16.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:45:16.701: INFO: namespace proxy-1036 deletion completed in 6.189673575s

• [SLOW TEST:6.569 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
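Each of the twenty numbered requests above hits the node's proxy subresource for the kubelet's log directory. A sketch of how that API path is assembled (node name and port taken from the log; the helper itself is illustrative):

```python
def node_logs_proxy_path(node_name, kubelet_port=10250):
    # Proxy subresource path for a node's kubelet log directory,
    # matching the URLs requested repeatedly in the test above.
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/logs/"

print(node_logs_proxy_path("iruya-node"))
# → /api/v1/nodes/iruya-node:10250/proxy/logs/
```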
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:45:16.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6200943c-e8fb-445c-a688-5441710fdcb3
STEP: Creating a pod to test consume secrets
Jan 31 14:45:16.830: INFO: Waiting up to 5m0s for pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4" in namespace "secrets-2541" to be "success or failure"
Jan 31 14:45:16.874: INFO: Pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 43.11174ms
Jan 31 14:45:18.891: INFO: Pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060792619s
Jan 31 14:45:20.899: INFO: Pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06838431s
Jan 31 14:45:22.916: INFO: Pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085332952s
Jan 31 14:45:24.926: INFO: Pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095021007s
STEP: Saw pod success
Jan 31 14:45:24.926: INFO: Pod "pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4" satisfied condition "success or failure"
Jan 31 14:45:24.931: INFO: Trying to get logs from node iruya-node pod pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4 container secret-volume-test: 
STEP: delete the pod
Jan 31 14:45:25.105: INFO: Waiting for pod pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4 to disappear
Jan 31 14:45:25.112: INFO: Pod pod-secrets-3dce743f-72f5-4f14-a29f-80be67de3ce4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:45:25.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2541" for this suite.
Jan 31 14:45:33.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:45:33.613: INFO: namespace secrets-2541 deletion completed in 8.483848076s

• [SLOW TEST:16.911 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
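The "mappings" in this test project a Secret key onto a chosen file path inside the volume via the volume's `items` list. A minimal sketch of that portion of a pod spec; the secret name comes from the run above, but the key/path pair is illustrative (the real test's data keys are not shown in this log):

```python
secret_volume = {
    "name": "secret-volume",
    "secret": {
        # Secret name from the run above; key/path pairs are illustrative.
        "secretName": "secret-test-map-6200943c-e8fb-445c-a688-5441710fdcb3",
        "items": [
            # Without a mapping the key would appear at <mount>/data-1;
            # with it, the file is projected to <mount>/new-path-data-1.
            {"key": "data-1", "path": "new-path-data-1"},
        ],
    },
}
print(secret_volume["secret"]["items"][0]["path"])  # → new-path-data-1
```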
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:45:33.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-09d4def2-614a-406c-bad4-8257844d52f6
STEP: Creating a pod to test consume secrets
Jan 31 14:45:33.736: INFO: Waiting up to 5m0s for pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5" in namespace "secrets-1647" to be "success or failure"
Jan 31 14:45:33.742: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.947495ms
Jan 31 14:45:35.750: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014271599s
Jan 31 14:45:37.766: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030026218s
Jan 31 14:45:39.773: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03740429s
Jan 31 14:45:41.808: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072040028s
Jan 31 14:45:43.826: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089485498s
STEP: Saw pod success
Jan 31 14:45:43.826: INFO: Pod "pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5" satisfied condition "success or failure"
Jan 31 14:45:43.830: INFO: Trying to get logs from node iruya-node pod pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5 container secret-volume-test: 
STEP: delete the pod
Jan 31 14:45:43.931: INFO: Waiting for pod pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5 to disappear
Jan 31 14:45:43.940: INFO: Pod pod-secrets-a21f389d-58ad-4f67-b792-9ac704751df5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:45:43.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1647" for this suite.
Jan 31 14:45:50.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:45:50.217: INFO: namespace secrets-1647 deletion completed in 6.210369807s

• [SLOW TEST:16.604 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:45:50.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f in namespace container-probe-1094
Jan 31 14:45:58.421: INFO: Started pod liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f in namespace container-probe-1094
STEP: checking the pod's current state and verifying that restartCount is present
Jan 31 14:45:58.428: INFO: Initial restart count of pod liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f is 0
Jan 31 14:46:18.624: INFO: Restart count of pod container-probe-1094/liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f is now 1 (20.195530662s elapsed)
Jan 31 14:46:38.809: INFO: Restart count of pod container-probe-1094/liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f is now 2 (40.380515248s elapsed)
Jan 31 14:46:59.221: INFO: Restart count of pod container-probe-1094/liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f is now 3 (1m0.792716074s elapsed)
Jan 31 14:47:19.329: INFO: Restart count of pod container-probe-1094/liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f is now 4 (1m20.900579576s elapsed)
Jan 31 14:48:30.250: INFO: Restart count of pod container-probe-1094/liveness-1fc9c4c5-b53d-4ab6-a855-10260fc7465f is now 5 (2m31.821040662s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:48:30.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1094" for this suite.
Jan 31 14:48:36.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:48:36.451: INFO: namespace container-probe-1094 deletion completed in 6.160913762s

• [SLOW TEST:166.234 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:48:36.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 14:48:36.581: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 31 14:48:36.605: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 31 14:48:41.649: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 14:48:45.665: INFO: Creating deployment "test-rolling-update-deployment"
Jan 31 14:48:45.677: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 31 14:48:45.856: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 31 14:48:47.878: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 31 14:48:47.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:48:49.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:48:51.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078926, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716078925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 31 14:48:53.895: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 31 14:48:53.908: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4545,SelfLink:/apis/apps/v1/namespaces/deployment-4545/deployments/test-rolling-update-deployment,UID:bfff2d75-ee4d-4318-8e76-811aa04b2a57,ResourceVersion:22578751,Generation:1,CreationTimestamp:2020-01-31 14:48:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-31 14:48:46 +0000 UTC 2020-01-31 14:48:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-31 14:48:53 +0000 UTC 2020-01-31 14:48:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 31 14:48:53.916: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4545,SelfLink:/apis/apps/v1/namespaces/deployment-4545/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:35260903-7253-4d94-88ae-01cb5c84d9a2,ResourceVersion:22578741,Generation:1,CreationTimestamp:2020-01-31 14:48:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bfff2d75-ee4d-4318-8e76-811aa04b2a57 0xc002fba8e7 0xc002fba8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 31 14:48:53.916: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 31 14:48:53.916: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4545,SelfLink:/apis/apps/v1/namespaces/deployment-4545/replicasets/test-rolling-update-controller,UID:37252e56-7403-4125-b3f9-1ecbb357d4d1,ResourceVersion:22578750,Generation:2,CreationTimestamp:2020-01-31 14:48:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bfff2d75-ee4d-4318-8e76-811aa04b2a57 0xc002fba817 0xc002fba818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 14:48:53.924: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-s2fnr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-s2fnr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4545,SelfLink:/api/v1/namespaces/deployment-4545/pods/test-rolling-update-deployment-79f6b9d75c-s2fnr,UID:28164a84-2ed6-4eca-ac76-52e4f1f75d2c,ResourceVersion:22578740,Generation:0,CreationTimestamp:2020-01-31 14:48:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 35260903-7253-4d94-88ae-01cb5c84d9a2 0xc002fbb1b7 0xc002fbb1b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-264gp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-264gp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-264gp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fbb230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fbb250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:48:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:48:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:48:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 14:48:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-31 14:48:46 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-31 14:48:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2ffd0b5ea971e3d2c0db9ec5671eb4585830aea68e483b8c57eb5028263c2ecc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:48:53.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4545" for this suite.
Jan 31 14:49:00.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:49:00.177: INFO: namespace deployment-4545 deletion completed in 6.233693448s

• [SLOW TEST:23.725 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:49:00.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 31 14:49:00.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 31 14:49:00.625: INFO: stderr: ""
Jan 31 14:49:00.625: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:49:00.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1282" for this suite.
Jan 31 14:49:06.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:49:06.825: INFO: namespace kubectl-1282 deletion completed in 6.185998893s

• [SLOW TEST:6.647 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:49:06.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:50:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5964" for this suite.
Jan 31 14:50:06.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:50:06.751: INFO: namespace container-runtime-5964 deletion completed in 6.111141333s

• [SLOW TEST:59.926 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:50:06.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 31 14:50:07.478: INFO: created pod pod-service-account-defaultsa
Jan 31 14:50:07.478: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 31 14:50:07.500: INFO: created pod pod-service-account-mountsa
Jan 31 14:50:07.500: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 31 14:50:07.537: INFO: created pod pod-service-account-nomountsa
Jan 31 14:50:07.537: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 31 14:50:07.564: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 31 14:50:07.564: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 31 14:50:07.685: INFO: created pod pod-service-account-mountsa-mountspec
Jan 31 14:50:07.685: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 31 14:50:07.732: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 31 14:50:07.733: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 31 14:50:08.184: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 31 14:50:08.184: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 31 14:50:08.194: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 31 14:50:08.194: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 31 14:50:08.236: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 31 14:50:08.236: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:50:08.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4449" for this suite.
Jan 31 14:50:50.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:50:50.647: INFO: namespace svcaccounts-4449 deletion completed in 42.080575991s

• [SLOW TEST:43.897 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:50:50.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-865add92-9c8d-4374-b533-0f0fcde9cce3
STEP: Creating secret with name s-test-opt-upd-734ab5a8-153e-4665-a71a-203957f0921c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-865add92-9c8d-4374-b533-0f0fcde9cce3
STEP: Updating secret s-test-opt-upd-734ab5a8-153e-4665-a71a-203957f0921c
STEP: Creating secret with name s-test-opt-create-cff6463b-4df2-4f45-8170-371efaf14e34
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:52:31.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1828" for this suite.
Jan 31 14:52:53.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:52:53.344: INFO: namespace secrets-1828 deletion completed in 22.125186887s

• [SLOW TEST:122.696 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
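The Secrets spec above ("waiting to observe update in volume") boils down to a poll-until loop: the framework repeatedly checks the mounted volume until the kubelet has synced the deleted/updated/created secrets into it, or a timeout expires. A minimal sketch of that pattern follows — `wait_for`, `kubelet_sync`, and `content_updated` are hypothetical names for illustration, not the framework's actual helpers:

```python
import time

def wait_for(predicate, timeout=120.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        if predicate():
            return True
        sleep(interval)
    return False

# Simulate a kubelet syncing updated secret content into the volume
# some number of polls after the API object changed.
state = {"volume_content": "old"}

def kubelet_sync():
    state["volume_content"] = "updated"

polls = []
def content_updated():
    polls.append(1)
    if len(polls) == 3:          # pretend the sync lands on the third poll
        kubelet_sync()
    return state["volume_content"] == "updated"

# No real sleeping in this sketch: pass a no-op sleep.
observed = wait_for(content_updated, timeout=10, interval=0, sleep=lambda s: None)
```

The real test's long tail (roughly 100 seconds before "waiting to observe update in volume" completes) is this loop waiting out the kubelet's secret sync period.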
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:52:53.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-vgmm
STEP: Creating a pod to test atomic-volume-subpath
Jan 31 14:52:53.472: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vgmm" in namespace "subpath-2796" to be "success or failure"
Jan 31 14:52:53.491: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Pending", Reason="", readiness=false. Elapsed: 18.759886ms
Jan 31 14:52:55.502: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030069502s
Jan 31 14:52:57.510: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038548081s
Jan 31 14:52:59.519: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047017796s
Jan 31 14:53:01.531: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059596447s
Jan 31 14:53:03.542: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 10.070538177s
Jan 31 14:53:05.561: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 12.089453472s
Jan 31 14:53:07.585: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 14.112853152s
Jan 31 14:53:09.629: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 16.157350726s
Jan 31 14:53:11.641: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 18.169154396s
Jan 31 14:53:13.654: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 20.181791918s
Jan 31 14:53:15.663: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 22.191147929s
Jan 31 14:53:17.672: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 24.200601311s
Jan 31 14:53:19.683: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 26.210782244s
Jan 31 14:53:21.695: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Running", Reason="", readiness=true. Elapsed: 28.222830834s
Jan 31 14:53:23.705: INFO: Pod "pod-subpath-test-secret-vgmm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.232949638s
STEP: Saw pod success
Jan 31 14:53:23.705: INFO: Pod "pod-subpath-test-secret-vgmm" satisfied condition "success or failure"
Jan 31 14:53:23.716: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-vgmm container test-container-subpath-secret-vgmm: 
STEP: delete the pod
Jan 31 14:53:24.130: INFO: Waiting for pod pod-subpath-test-secret-vgmm to disappear
Jan 31 14:53:24.142: INFO: Pod pod-subpath-test-secret-vgmm no longer exists
STEP: Deleting pod pod-subpath-test-secret-vgmm
Jan 31 14:53:24.142: INFO: Deleting pod "pod-subpath-test-secret-vgmm" in namespace "subpath-2796"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:53:24.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2796" for this suite.
Jan 31 14:53:30.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:53:30.377: INFO: namespace subpath-2796 deletion completed in 6.222937194s

• [SLOW TEST:37.032 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
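The run of `Phase="Pending"` → `Phase="Running"` → `Phase="Succeeded"` lines in the subpath spec is the framework polling the pod every ~2 seconds until it reaches a terminal phase satisfying the "success or failure" condition. The decision logic can be sketched as follows (hypothetical function names; the real framework lives in `test/e2e/framework`):

```python
def pod_condition(phase):
    """Mirror the e2e "success or failure" condition: Succeeded satisfies it,
    Failed is a hard failure, any other phase means keep polling."""
    if phase == "Succeeded":
        return "success"
    if phase == "Failed":
        return "failure"
    return None  # Pending / Running / Unknown: not terminal yet

def first_terminal_outcome(phases):
    """Walk a sequence of observed phases, as the poll loop would see them,
    and return the first terminal outcome (or None if none is reached)."""
    for phase in phases:
        outcome = pod_condition(phase)
        if outcome is not None:
            return outcome
    return None

# The sequence recorded in the log above: 5 Pending, 9 Running, then Succeeded.
observed_phases = ["Pending"] * 5 + ["Running"] * 9 + ["Succeeded"]
```

Note that "Running" with `readiness=true` still does not satisfy the condition — only a terminal phase does, which is why the loop runs a full 30 seconds here.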
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:53:30.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0131 14:54:01.076187       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 14:54:01.076: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:54:01.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2673" for this suite.
Jan 31 14:54:07.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:54:07.233: INFO: namespace gc-2673 deletion completed in 6.146355168s

• [SLOW TEST:36.856 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
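The garbage-collector spec above verifies that deleting a Deployment with `deleteOptions.propagationPolicy: Orphan` leaves its ReplicaSet alive (the GC strips the ownerReference instead of cascading the delete). A toy model of those semantics, under the simplifying assumption of a flat uid → owner map rather than real API objects:

```python
def delete_owner(objects, owner_uid, propagation="Background"):
    """Toy model of deleteOptions.propagationPolicy handling.
    `objects` maps uid -> {"owner": uid-or-None}; returns the survivors."""
    survivors = {uid: obj.copy() for uid, obj in objects.items()
                 if uid != owner_uid}
    doomed = set()
    for uid, obj in survivors.items():
        if obj["owner"] == owner_uid:
            if propagation == "Orphan":
                obj["owner"] = None   # keep the dependent, drop the ownerReference
            else:
                doomed.add(uid)       # Background/Foreground: cascade the delete
    for uid in doomed:
        del survivors[uid]
    return survivors

cluster = {
    "deploy-1": {"owner": None},        # the Deployment
    "rs-1":     {"owner": "deploy-1"},  # the ReplicaSet it created
}
orphaned = delete_owner(cluster, "deploy-1", propagation="Orphan")
cascaded = delete_owner(cluster, "deploy-1", propagation="Background")
```

This is why the test waits 30 seconds after the delete: it is checking that the GC does *not* mistakenly reap the orphaned ReplicaSet.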
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:54:07.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a96695fa-58bb-48dc-94e5-92cc81a56266
STEP: Creating a pod to test consume configMaps
Jan 31 14:54:08.390: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f" in namespace "projected-56" to be "success or failure"
Jan 31 14:54:08.437: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.040961ms
Jan 31 14:54:10.909: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.518881436s
Jan 31 14:54:12.928: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537568732s
Jan 31 14:54:14.937: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.546985564s
Jan 31 14:54:16.950: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560142913s
Jan 31 14:54:19.030: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639805981s
STEP: Saw pod success
Jan 31 14:54:19.030: INFO: Pod "pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f" satisfied condition "success or failure"
Jan 31 14:54:19.036: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f container projected-configmap-volume-test: 
STEP: delete the pod
Jan 31 14:54:19.341: INFO: Waiting for pod pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f to disappear
Jan 31 14:54:19.375: INFO: Pod pod-projected-configmaps-30a5516f-85f7-461b-aa84-df5d56c9889f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:54:19.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-56" for this suite.
Jan 31 14:54:25.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:54:25.552: INFO: namespace projected-56 deletion completed in 6.169941426s

• [SLOW TEST:18.319 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:54:25.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 31 14:54:25.619: INFO: Waiting up to 5m0s for pod "pod-1de60453-a26b-4201-b3bb-1f104523580b" in namespace "emptydir-6861" to be "success or failure"
Jan 31 14:54:25.681: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b": Phase="Pending", Reason="", readiness=false. Elapsed: 61.548927ms
Jan 31 14:54:27.693: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074093734s
Jan 31 14:54:29.704: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08449884s
Jan 31 14:54:31.717: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097170403s
Jan 31 14:54:33.730: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110400065s
Jan 31 14:54:35.738: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119066642s
STEP: Saw pod success
Jan 31 14:54:35.739: INFO: Pod "pod-1de60453-a26b-4201-b3bb-1f104523580b" satisfied condition "success or failure"
Jan 31 14:54:35.743: INFO: Trying to get logs from node iruya-node pod pod-1de60453-a26b-4201-b3bb-1f104523580b container test-container: 
STEP: delete the pod
Jan 31 14:54:35.804: INFO: Waiting for pod pod-1de60453-a26b-4201-b3bb-1f104523580b to disappear
Jan 31 14:54:35.821: INFO: Pod pod-1de60453-a26b-4201-b3bb-1f104523580b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:54:35.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6861" for this suite.
Jan 31 14:54:43.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:54:44.058: INFO: namespace emptydir-6861 deletion completed in 8.179086983s

• [SLOW TEST:18.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
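The EmptyDir spec above mounts a tmpfs-backed emptyDir with mode 0777 owned by root, then has the test container stat the mount and print the permission bits for the framework to verify. A local sketch of that check, using a temporary directory in place of the volume mount (names here are illustrative, not the test's actual code):

```python
import os
import stat
import tempfile

def mode_string(path):
    """Return the permission bits of `path` as an octal string,
    roughly what the test container prints when it stats the mount."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))

# Stand-in for the emptyDir mount: create a directory and apply the
# 0777 mode the pod spec requests, then read it back.
mount = tempfile.mkdtemp()
os.chmod(mount, 0o777)
perms = mode_string(mount)
```

In the real test the expected string also encodes the file type and owner (`root`), and the comparison happens against the container's stdout rather than a local stat.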
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:54:44.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0e46c23f-2ff4-4a4b-b142-469a5604449d
STEP: Creating a pod to test consume secrets
Jan 31 14:54:44.176: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f" in namespace "projected-6356" to be "success or failure"
Jan 31 14:54:44.201: INFO: Pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.541022ms
Jan 31 14:54:46.213: INFO: Pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037633222s
Jan 31 14:54:48.296: INFO: Pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120500587s
Jan 31 14:54:50.305: INFO: Pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129736532s
Jan 31 14:54:52.322: INFO: Pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146415972s
STEP: Saw pod success
Jan 31 14:54:52.322: INFO: Pod "pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f" satisfied condition "success or failure"
Jan 31 14:54:52.327: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f container projected-secret-volume-test: 
STEP: delete the pod
Jan 31 14:54:52.519: INFO: Waiting for pod pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f to disappear
Jan 31 14:54:52.529: INFO: Pod pod-projected-secrets-d136b2c4-0d99-4c98-aed9-ffaf88b1488f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:54:52.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6356" for this suite.
Jan 31 14:54:58.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:54:58.716: INFO: namespace projected-6356 deletion completed in 6.173885993s

• [SLOW TEST:14.657 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:54:58.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 31 14:54:58.843: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:55:13.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2010" for this suite.
Jan 31 14:55:19.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:55:20.029: INFO: namespace init-container-2010 deletion completed in 6.223976802s

• [SLOW TEST:21.313 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:55:20.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 14:55:28.239: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:55:28.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1381" for this suite.
Jan 31 14:55:34.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:55:34.459: INFO: namespace container-runtime-1381 deletion completed in 6.178145267s

• [SLOW TEST:14.429 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:55:34.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:55:42.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6473" for this suite.
Jan 31 14:56:28.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:56:28.932: INFO: namespace kubelet-test-6473 deletion completed in 46.22290043s

• [SLOW TEST:54.472 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:56:28.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 31 14:56:28.993: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:56:42.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2773" for this suite.
Jan 31 14:56:48.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:56:49.002: INFO: namespace pods-2773 deletion completed in 6.216254917s

• [SLOW TEST:20.070 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
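The Pods spec above ("setting up watch" … "verifying pod deletion was observed") asserts that a watch on the pod sees creation and deletion events in the right order. The ordering check can be sketched with a minimal event recorder — `PodWatch` and its methods are hypothetical stand-ins for the client-go watch interface:

```python
from collections import deque

class PodWatch:
    """Minimal stand-in for a watch on a single pod: records events in order."""

    def __init__(self):
        self.events = deque()

    def observe(self, event_type, pod_name):
        self.events.append((event_type, pod_name))

    def saw_in_order(self, *expected_types):
        """True if the expected event types appear in this relative order,
        ignoring unrelated events in between (a subsequence check)."""
        it = iter(t for t, _ in self.events)
        return all(e in it for e in expected_types)

w = PodWatch()
w.observe("ADDED", "pod-submit-remove")      # pod creation observed
w.observe("MODIFIED", "pod-submit-remove")   # graceful-termination status updates
w.observe("DELETED", "pod-submit-remove")    # deletion observed last
```

The `MODIFIED` events in the middle correspond to the kubelet acknowledging the termination notice before the object is finally removed, which is the gap the test spends its ~14 seconds in.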
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:56:49.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2105
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 14:56:49.069: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 14:57:27.346: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2105 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:57:27.347: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:57:27.450299       9 log.go:172] (0xc000610580) (0xc002abe640) Create stream
I0131 14:57:27.450496       9 log.go:172] (0xc000610580) (0xc002abe640) Stream added, broadcasting: 1
I0131 14:57:27.462105       9 log.go:172] (0xc000610580) Reply frame received for 1
I0131 14:57:27.462266       9 log.go:172] (0xc000610580) (0xc00261a780) Create stream
I0131 14:57:27.462283       9 log.go:172] (0xc000610580) (0xc00261a780) Stream added, broadcasting: 3
I0131 14:57:27.464812       9 log.go:172] (0xc000610580) Reply frame received for 3
I0131 14:57:27.464839       9 log.go:172] (0xc000610580) (0xc0027e0000) Create stream
I0131 14:57:27.464850       9 log.go:172] (0xc000610580) (0xc0027e0000) Stream added, broadcasting: 5
I0131 14:57:27.466150       9 log.go:172] (0xc000610580) Reply frame received for 5
I0131 14:57:27.624151       9 log.go:172] (0xc000610580) Data frame received for 3
I0131 14:57:27.624289       9 log.go:172] (0xc00261a780) (3) Data frame handling
I0131 14:57:27.624348       9 log.go:172] (0xc00261a780) (3) Data frame sent
I0131 14:57:27.752999       9 log.go:172] (0xc000610580) (0xc00261a780) Stream removed, broadcasting: 3
I0131 14:57:27.753608       9 log.go:172] (0xc000610580) Data frame received for 1
I0131 14:57:27.753804       9 log.go:172] (0xc000610580) (0xc0027e0000) Stream removed, broadcasting: 5
I0131 14:57:27.753946       9 log.go:172] (0xc002abe640) (1) Data frame handling
I0131 14:57:27.754008       9 log.go:172] (0xc002abe640) (1) Data frame sent
I0131 14:57:27.754043       9 log.go:172] (0xc000610580) (0xc002abe640) Stream removed, broadcasting: 1
I0131 14:57:27.754087       9 log.go:172] (0xc000610580) Go away received
I0131 14:57:27.754679       9 log.go:172] (0xc000610580) (0xc002abe640) Stream removed, broadcasting: 1
I0131 14:57:27.754696       9 log.go:172] (0xc000610580) (0xc00261a780) Stream removed, broadcasting: 3
I0131 14:57:27.754705       9 log.go:172] (0xc000610580) (0xc0027e0000) Stream removed, broadcasting: 5
Jan 31 14:57:27.754: INFO: Found all expected endpoints: [netserver-0]
Jan 31 14:57:27.764: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2105 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:57:27.764: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:57:27.859825       9 log.go:172] (0xc000313c30) (0xc002ada640) Create stream
I0131 14:57:27.859998       9 log.go:172] (0xc000313c30) (0xc002ada640) Stream added, broadcasting: 1
I0131 14:57:27.868490       9 log.go:172] (0xc000313c30) Reply frame received for 1
I0131 14:57:27.868541       9 log.go:172] (0xc000313c30) (0xc002ada780) Create stream
I0131 14:57:27.868555       9 log.go:172] (0xc000313c30) (0xc002ada780) Stream added, broadcasting: 3
I0131 14:57:27.869965       9 log.go:172] (0xc000313c30) Reply frame received for 3
I0131 14:57:27.869989       9 log.go:172] (0xc000313c30) (0xc00261a8c0) Create stream
I0131 14:57:27.869999       9 log.go:172] (0xc000313c30) (0xc00261a8c0) Stream added, broadcasting: 5
I0131 14:57:27.871319       9 log.go:172] (0xc000313c30) Reply frame received for 5
I0131 14:57:27.995368       9 log.go:172] (0xc000313c30) Data frame received for 3
I0131 14:57:27.995491       9 log.go:172] (0xc002ada780) (3) Data frame handling
I0131 14:57:27.995518       9 log.go:172] (0xc002ada780) (3) Data frame sent
I0131 14:57:28.146913       9 log.go:172] (0xc000313c30) (0xc002ada780) Stream removed, broadcasting: 3
I0131 14:57:28.147089       9 log.go:172] (0xc000313c30) Data frame received for 1
I0131 14:57:28.147109       9 log.go:172] (0xc002ada640) (1) Data frame handling
I0131 14:57:28.147177       9 log.go:172] (0xc002ada640) (1) Data frame sent
I0131 14:57:28.147204       9 log.go:172] (0xc000313c30) (0xc002ada640) Stream removed, broadcasting: 1
I0131 14:57:28.147225       9 log.go:172] (0xc000313c30) (0xc00261a8c0) Stream removed, broadcasting: 5
I0131 14:57:28.147244       9 log.go:172] (0xc000313c30) Go away received
I0131 14:57:28.147692       9 log.go:172] (0xc000313c30) (0xc002ada640) Stream removed, broadcasting: 1
I0131 14:57:28.147712       9 log.go:172] (0xc000313c30) (0xc002ada780) Stream removed, broadcasting: 3
I0131 14:57:28.147720       9 log.go:172] (0xc000313c30) (0xc00261a8c0) Stream removed, broadcasting: 5
Jan 31 14:57:28.147: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:57:28.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2105" for this suite.
Jan 31 14:57:52.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:57:52.337: INFO: namespace pod-network-test-2105 deletion completed in 24.178190676s

• [SLOW TEST:63.335 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
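The node-to-pod HTTP check above shells into the host-test container and runs `curl ... http://10.32.0.4:8080/hostName | grep -v '^\s*$'`, then treats the surviving output line as the serving pod's name. A minimal Python sketch of that blank-line filter (the function name is illustrative, not part of the e2e framework):

```python
def non_blank_lines(text: str) -> list[str]:
    """Equivalent of `grep -v '^\\s*$'`: drop empty or whitespace-only lines."""
    return [line for line in text.splitlines() if line.strip()]

# The e2e check compares the filtered curl output against the expected
# endpoint names, e.g. [netserver-0].
print(non_blank_lines("netserver-0\n\n   \n"))  # → ['netserver-0']
```

Stripping blank lines matters because the exec'd shell pipeline can emit trailing newlines that would otherwise fail an exact hostname comparison.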
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:57:52.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 31 14:57:52.399: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:58:11.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8105" for this suite.
Jan 31 14:58:33.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:58:33.247: INFO: namespace init-container-8105 deletion completed in 22.177263431s

• [SLOW TEST:40.909 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
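The `should invoke init containers on a RestartAlways pod` test builds a pod whose `spec.initContainers` must all run to completion before the regular containers start. A hedged sketch of such a manifest (names, images, and commands are illustrative, not taken from the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # hypothetical name
spec:
  restartPolicy: Always      # RestartAlways, as in the test title
  initContainers:            # run in order, to completion, before `containers`
  - name: init-step
    image: busybox
    command: ['sh', '-c', 'echo init done']
  containers:
  - name: main
    image: nginx:1.14-alpine
```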
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:58:33.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:58:33.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979" in namespace "projected-4925" to be "success or failure"
Jan 31 14:58:33.418: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979": Phase="Pending", Reason="", readiness=false. Elapsed: 5.29433ms
Jan 31 14:58:35.429: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016545299s
Jan 31 14:58:37.436: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024026959s
Jan 31 14:58:39.449: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036166425s
Jan 31 14:58:41.457: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04505493s
Jan 31 14:58:43.470: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057140945s
STEP: Saw pod success
Jan 31 14:58:43.470: INFO: Pod "downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979" satisfied condition "success or failure"
Jan 31 14:58:43.483: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979 container client-container: 
STEP: delete the pod
Jan 31 14:58:43.612: INFO: Waiting for pod downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979 to disappear
Jan 31 14:58:43.620: INFO: Pod downwardapi-volume-77937086-77d1-4374-9056-7a3a90b59979 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:58:43.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4925" for this suite.
Jan 31 14:58:51.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:58:51.961: INFO: namespace projected-4925 deletion completed in 8.331039155s

• [SLOW TEST:18.714 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
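The projected downward API test reads a container's memory limit through a volume file; when `resources.limits.memory` is not set, the kubelet substitutes the node's allocatable memory as the default. A hedged manifest sketch of that wiring (pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo       # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'cat /etc/podinfo/memory_limit']
    # No resources.limits.memory set, so the file reports node allocatable.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```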
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:58:51.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e6d6b5aa-dc45-4e4d-8249-c269788f875d
STEP: Creating a pod to test consume configMaps
Jan 31 14:58:52.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf" in namespace "configmap-3422" to be "success or failure"
Jan 31 14:58:52.097: INFO: Pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 56.369293ms
Jan 31 14:58:54.112: INFO: Pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07131075s
Jan 31 14:58:56.119: INFO: Pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077877216s
Jan 31 14:58:58.132: INFO: Pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090547396s
Jan 31 14:59:00.142: INFO: Pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10116681s
STEP: Saw pod success
Jan 31 14:59:00.143: INFO: Pod "pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf" satisfied condition "success or failure"
Jan 31 14:59:00.147: INFO: Trying to get logs from node iruya-node pod pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf container configmap-volume-test: 
STEP: delete the pod
Jan 31 14:59:00.324: INFO: Waiting for pod pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf to disappear
Jan 31 14:59:00.328: INFO: Pod pod-configmaps-43ff911d-df94-4e70-a0ec-448c14b50ecf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:59:00.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3422" for this suite.
Jan 31 14:59:06.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:59:06.506: INFO: namespace configmap-3422 deletion completed in 6.169595642s

• [SLOW TEST:14.545 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
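`should be consumable in multiple volumes in the same pod` mounts a single ConfigMap at two separate paths inside one pod and reads both. A hedged sketch (pod, volume, and ConfigMap names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo          # hypothetical name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ['sh', '-c', 'cat /etc/cm-a/data && cat /etc/cm-b/data']
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:                      # two volumes backed by the same ConfigMap
  - name: cm-a
    configMap:
      name: shared-config       # hypothetical ConfigMap name
  - name: cm-b
    configMap:
      name: shared-config
```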
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:59:06.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 14:59:06.618: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c" in namespace "downward-api-7267" to be "success or failure"
Jan 31 14:59:06.631: INFO: Pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.354502ms
Jan 31 14:59:08.641: INFO: Pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022763207s
Jan 31 14:59:10.683: INFO: Pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064490146s
Jan 31 14:59:12.717: INFO: Pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098812053s
Jan 31 14:59:14.736: INFO: Pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117839666s
STEP: Saw pod success
Jan 31 14:59:14.736: INFO: Pod "downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c" satisfied condition "success or failure"
Jan 31 14:59:14.745: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c container client-container: 
STEP: delete the pod
Jan 31 14:59:14.830: INFO: Waiting for pod downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c to disappear
Jan 31 14:59:14.836: INFO: Pod downwardapi-volume-8a22aaa6-c46f-4b17-9f3b-d4a89a96831c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 14:59:14.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7267" for this suite.
Jan 31 14:59:22.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 14:59:23.026: INFO: namespace downward-api-7267 deletion completed in 8.185923895s

• [SLOW TEST:16.520 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 14:59:23.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8387
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 14:59:23.129: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 14:59:59.384: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8387 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 14:59:59.384: INFO: >>> kubeConfig: /root/.kube/config
I0131 14:59:59.476322       9 log.go:172] (0xc002d324d0) (0xc001296d20) Create stream
I0131 14:59:59.476403       9 log.go:172] (0xc002d324d0) (0xc001296d20) Stream added, broadcasting: 1
I0131 14:59:59.485690       9 log.go:172] (0xc002d324d0) Reply frame received for 1
I0131 14:59:59.485772       9 log.go:172] (0xc002d324d0) (0xc001c4c000) Create stream
I0131 14:59:59.485786       9 log.go:172] (0xc002d324d0) (0xc001c4c000) Stream added, broadcasting: 3
I0131 14:59:59.488246       9 log.go:172] (0xc002d324d0) Reply frame received for 3
I0131 14:59:59.488284       9 log.go:172] (0xc002d324d0) (0xc001296dc0) Create stream
I0131 14:59:59.488294       9 log.go:172] (0xc002d324d0) (0xc001296dc0) Stream added, broadcasting: 5
I0131 14:59:59.491234       9 log.go:172] (0xc002d324d0) Reply frame received for 5
I0131 15:00:00.730668       9 log.go:172] (0xc002d324d0) Data frame received for 3
I0131 15:00:00.731032       9 log.go:172] (0xc001c4c000) (3) Data frame handling
I0131 15:00:00.731095       9 log.go:172] (0xc001c4c000) (3) Data frame sent
I0131 15:00:01.037206       9 log.go:172] (0xc002d324d0) (0xc001c4c000) Stream removed, broadcasting: 3
I0131 15:00:01.037433       9 log.go:172] (0xc002d324d0) Data frame received for 1
I0131 15:00:01.037456       9 log.go:172] (0xc001296d20) (1) Data frame handling
I0131 15:00:01.037498       9 log.go:172] (0xc001296d20) (1) Data frame sent
I0131 15:00:01.037535       9 log.go:172] (0xc002d324d0) (0xc001296dc0) Stream removed, broadcasting: 5
I0131 15:00:01.037843       9 log.go:172] (0xc002d324d0) (0xc001296d20) Stream removed, broadcasting: 1
I0131 15:00:01.037886       9 log.go:172] (0xc002d324d0) Go away received
I0131 15:00:01.038509       9 log.go:172] (0xc002d324d0) (0xc001296d20) Stream removed, broadcasting: 1
I0131 15:00:01.038524       9 log.go:172] (0xc002d324d0) (0xc001c4c000) Stream removed, broadcasting: 3
I0131 15:00:01.038536       9 log.go:172] (0xc002d324d0) (0xc001296dc0) Stream removed, broadcasting: 5
Jan 31 15:00:01.038: INFO: Found all expected endpoints: [netserver-0]
Jan 31 15:00:01.049: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8387 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:00:01.049: INFO: >>> kubeConfig: /root/.kube/config
I0131 15:00:01.109654       9 log.go:172] (0xc001e46370) (0xc000529ea0) Create stream
I0131 15:00:01.109870       9 log.go:172] (0xc001e46370) (0xc000529ea0) Stream added, broadcasting: 1
I0131 15:00:01.123887       9 log.go:172] (0xc001e46370) Reply frame received for 1
I0131 15:00:01.124211       9 log.go:172] (0xc001e46370) (0xc001297040) Create stream
I0131 15:00:01.124229       9 log.go:172] (0xc001e46370) (0xc001297040) Stream added, broadcasting: 3
I0131 15:00:01.127906       9 log.go:172] (0xc001e46370) Reply frame received for 3
I0131 15:00:01.127974       9 log.go:172] (0xc001e46370) (0xc00294a6e0) Create stream
I0131 15:00:01.127979       9 log.go:172] (0xc001e46370) (0xc00294a6e0) Stream added, broadcasting: 5
I0131 15:00:01.129781       9 log.go:172] (0xc001e46370) Reply frame received for 5
I0131 15:00:02.242876       9 log.go:172] (0xc001e46370) Data frame received for 3
I0131 15:00:02.242994       9 log.go:172] (0xc001297040) (3) Data frame handling
I0131 15:00:02.243046       9 log.go:172] (0xc001297040) (3) Data frame sent
I0131 15:00:02.364719       9 log.go:172] (0xc001e46370) (0xc001297040) Stream removed, broadcasting: 3
I0131 15:00:02.365065       9 log.go:172] (0xc001e46370) Data frame received for 1
I0131 15:00:02.365100       9 log.go:172] (0xc000529ea0) (1) Data frame handling
I0131 15:00:02.365214       9 log.go:172] (0xc000529ea0) (1) Data frame sent
I0131 15:00:02.365228       9 log.go:172] (0xc001e46370) (0xc000529ea0) Stream removed, broadcasting: 1
I0131 15:00:02.365635       9 log.go:172] (0xc001e46370) (0xc00294a6e0) Stream removed, broadcasting: 5
I0131 15:00:02.365852       9 log.go:172] (0xc001e46370) (0xc000529ea0) Stream removed, broadcasting: 1
I0131 15:00:02.365862       9 log.go:172] (0xc001e46370) (0xc001297040) Stream removed, broadcasting: 3
I0131 15:00:02.365866       9 log.go:172] (0xc001e46370) (0xc00294a6e0) Stream removed, broadcasting: 5
I0131 15:00:02.366648       9 log.go:172] (0xc001e46370) Go away received
Jan 31 15:00:02.367: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:00:02.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8387" for this suite.
Jan 31 15:00:24.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:00:25.133: INFO: namespace pod-network-test-8387 deletion completed in 22.75112656s

• [SLOW TEST:62.106 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:00:25.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:00:30.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6294" for this suite.
Jan 31 15:00:36.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:00:36.902: INFO: namespace watch-6294 deletion completed in 6.236049907s

• [SLOW TEST:11.769 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
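The Watchers test starts several concurrent watches from each resource version of the produced events and asserts that every watcher observes the same sequence. The ordering property it verifies can be sketched stand-alone in Python (no client library involved; the function name is illustrative):

```python
def consistent_order(streams):
    """True when every watcher saw the identical resourceVersion sequence."""
    return all(stream == streams[0] for stream in streams[1:])

# Three concurrent watchers reporting the versions they received:
watchers = [["101", "102", "103"]] * 3
print(consistent_order(watchers))  # → True
```

The real test additionally restarts watches from intermediate resource versions, which must replay the identical suffix of the event stream.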
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:00:36.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 31 15:03:36.156: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:36.173: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:38.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:38.184: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:40.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:40.196: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:42.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:42.184: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:44.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:44.183: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:46.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:46.188: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:48.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:48.187: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:50.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:50.189: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:52.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:52.185: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:54.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:54.181: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:56.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:56.197: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:03:58.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:03:58.186: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:04:00.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:04:00.185: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:04:02.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:04:02.184: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:04:04.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:04:04.184: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:04:06.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:04:06.182: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 31 15:04:08.174: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 31 15:04:08.182: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:04:08.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7173" for this suite.
Jan 31 15:04:30.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:04:30.360: INFO: namespace container-lifecycle-hook-7173 deletion completed in 22.16806547s

• [SLOW TEST:233.457 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
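The lifecycle-hook test above creates `pod-with-poststart-exec-hook`, waits for the exec hook to run, then deletes the pod (the long "still exists" polling reflects graceful termination). A hedged manifest sketch of a postStart exec hook (handler command, image, and container name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    lifecycle:
      postStart:
        exec:                  # runs immediately after the container starts
          command: ['sh', '-c', 'echo poststart > /tmp/hook-ran']
```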
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:04:30.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:04:31.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f" in namespace "downward-api-740" to be "success or failure"
Jan 31 15:04:31.113: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f": Phase="Pending", Reason="", readiness=false. Elapsed: 80.158203ms
Jan 31 15:04:33.121: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089152712s
Jan 31 15:04:35.134: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102117933s
Jan 31 15:04:37.142: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110089376s
Jan 31 15:04:39.151: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118683048s
Jan 31 15:04:41.163: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.130821007s
STEP: Saw pod success
Jan 31 15:04:41.164: INFO: Pod "downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f" satisfied condition "success or failure"
Jan 31 15:04:41.167: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f container client-container: 
STEP: delete the pod
Jan 31 15:04:41.552: INFO: Waiting for pod downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f to disappear
Jan 31 15:04:41.569: INFO: Pod downwardapi-volume-c735a669-6015-4ee9-9381-4b6d4d4a899f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:04:41.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-740" for this suite.
Jan 31 15:04:47.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:04:47.728: INFO: namespace downward-api-740 deletion completed in 6.148039951s

• [SLOW TEST:17.368 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
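Editor's note: the spec above verifies that, when a container sets no CPU limit, a `downwardAPI` volume reporting `limits.cpu` falls back to the node's allocatable CPU. A minimal sketch of such a pod (names, image, and command are illustrative, not the test's exact manifest):

```yaml
# Hypothetical approximation of the test pod: no CPU limit is declared,
# so the downward API file reports the node's allocatable CPU instead.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name taken from the log
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```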
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:04:47.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:05:20.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4888" for this suite.
Jan 31 15:05:26.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:05:26.466: INFO: namespace namespaces-4888 deletion completed in 6.168975293s
STEP: Destroying namespace "nsdeletetest-5393" for this suite.
Jan 31 15:05:26.471: INFO: Namespace nsdeletetest-5393 was already deleted
STEP: Destroying namespace "nsdeletetest-2900" for this suite.
Jan 31 15:05:32.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:05:32.679: INFO: namespace nsdeletetest-2900 deletion completed in 6.207796088s

• [SLOW TEST:44.951 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
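Editor's note: the namespace-deletion flow logged above (create a namespace, create a pod in it, delete the namespace, verify the pod is gone) can be reproduced with a manifest like the following; deleting the namespace cascades to everything inside it. Names are illustrative:

```yaml
# Deleting the namespace below removes the pod along with it.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example   # illustrative name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-example
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
```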
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:05:32.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-322d52cb-851b-4c6c-b149-57da866fd82b
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:05:32.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3264" for this suite.
Jan 31 15:05:38.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:05:39.038: INFO: namespace configmap-3264 deletion completed in 6.265092199s

• [SLOW TEST:6.358 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
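Editor's note: this spec relies on API-server validation, which rejects ConfigMap keys that are empty (keys must be non-empty and consist of alphanumerics, `-`, `_`, or `.`). A manifest that reproduces the rejection:

```yaml
# The API server rejects this ConfigMap because of the empty data key.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey   # illustrative name
data:
  "": "value"   # empty key: fails validation
```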
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:05:39.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 31 15:05:39.129: INFO: PodSpec: initContainers in spec.initContainers
Jan 31 15:06:38.917: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7b22d394-2750-4345-a54f-85e7c3a87656", GenerateName:"", Namespace:"init-container-1770", SelfLink:"/api/v1/namespaces/init-container-1770/pods/pod-init-7b22d394-2750-4345-a54f-85e7c3a87656", UID:"537fa0c5-66a0-4fc1-8a3c-c50e2e1b7601", ResourceVersion:"22581186", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716079939, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"129538865"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-76mnz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002f82540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-76mnz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-76mnz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-76mnz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000afcb48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0027308a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000afcca0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000afccc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000afccc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000afcccc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716079939, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716079939, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716079939, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716079939, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00338f400), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b36620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001b36690)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e30ae4812226dcac0bb6b8bfa6498e5f1c9cc906d7d148233d73b5db589a9c2c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00338f440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00338f420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:06:38.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1770" for this suite.
Jan 31 15:07:03.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:07:03.126: INFO: namespace init-container-1770 deletion completed in 24.200011475s

• [SLOW TEST:84.088 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
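Editor's note: the PodSpec dumped above boils down to the following manifest (reconstructed from the dump): `init1` runs `/bin/false` and fails on every restart, so `init2` and the app container `run1` never start, even though the restart policy is `Always`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # illustrative; the test uses a generated name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always exits non-zero, blocking the rest
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:               # limits/requests as shown in the dump
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```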
SSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:07:03.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 15:07:11.344: INFO: Waiting up to 5m0s for pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6" in namespace "pods-658" to be "success or failure"
Jan 31 15:07:11.427: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 82.660874ms
Jan 31 15:07:13.443: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098857192s
Jan 31 15:07:15.455: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111025627s
Jan 31 15:07:17.467: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122957009s
Jan 31 15:07:19.474: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130526338s
Jan 31 15:07:21.484: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140273032s
STEP: Saw pod success
Jan 31 15:07:21.484: INFO: Pod "client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6" satisfied condition "success or failure"
Jan 31 15:07:21.489: INFO: Trying to get logs from node iruya-node pod client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6 container env3cont: 
STEP: delete the pod
Jan 31 15:07:21.549: INFO: Waiting for pod client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6 to disappear
Jan 31 15:07:21.559: INFO: Pod client-envvars-06fd8a84-06d1-41c4-b8f0-a6357a0dd3c6 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:07:21.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-658" for this suite.
Jan 31 15:08:07.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:08:07.815: INFO: namespace pods-658 deletion completed in 46.246785424s

• [SLOW TEST:64.689 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
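Editor's note: the client pod in this spec checks that services existing when a pod is created are exposed to it as `<NAME>_SERVICE_HOST` / `<NAME>_SERVICE_PORT` environment variables. A sketch of such a client pod (container name taken from the log; image and command are illustrative):

```yaml
# Prints the service discovery variables injected at pod creation time.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.29
    command: ["sh", "-c", "env | grep _SERVICE_"]
```

Only services that exist before the pod starts appear in its environment; services created later require DNS-based discovery instead.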
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:08:07.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0131 15:08:11.896299       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 15:08:11.896: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:08:11.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2227" for this suite.
Jan 31 15:08:18.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:08:18.310: INFO: namespace gc-2227 deletion completed in 6.389255288s

• [SLOW TEST:10.494 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
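Editor's note: this garbage-collector spec deletes a Deployment without orphaning, then polls until the owned ReplicaSet and pods disappear (the `expected 0 rs, got 1 rs` lines above are those polls; the ReplicaSet carries an `ownerReference` to the Deployment, so the GC removes it once the owner is gone). A minimal Deployment of the kind involved (names illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-test-deployment   # illustrative name
spec:
  replicas: 2                # matches the "got 2 pods" polls above
  selector:
    matchLabels:
      app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
```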
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:08:18.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 31 15:08:18.529: INFO: Waiting up to 5m0s for pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4" in namespace "emptydir-2722" to be "success or failure"
Jan 31 15:08:18.605: INFO: Pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 74.668831ms
Jan 31 15:08:20.625: INFO: Pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095108659s
Jan 31 15:08:22.635: INFO: Pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104876886s
Jan 31 15:08:24.647: INFO: Pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117559746s
Jan 31 15:08:26.659: INFO: Pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129387691s
STEP: Saw pod success
Jan 31 15:08:26.659: INFO: Pod "pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4" satisfied condition "success or failure"
Jan 31 15:08:26.663: INFO: Trying to get logs from node iruya-node pod pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4 container test-container: 
STEP: delete the pod
Jan 31 15:08:26.909: INFO: Waiting for pod pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4 to disappear
Jan 31 15:08:26.928: INFO: Pod pod-f98abb5a-5511-4fc3-8001-3ef71de29ef4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:08:26.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2722" for this suite.
Jan 31 15:08:32.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:08:33.097: INFO: namespace emptydir-2722 deletion completed in 6.159748012s

• [SLOW TEST:14.785 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
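Editor's note: the `(root,0666,default)` case mounts an `emptyDir` volume on the default medium (node disk, as opposed to `medium: Memory`) and verifies that a file created with mode 0666 has the expected permissions. An approximate pod manifest (the test's actual image writes and stats the file; this sketch merely lists the mount):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container    # container name taken from the log
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}            # default medium: backed by node storage
```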
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:08:33.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3213
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 31 15:08:33.177: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 31 15:09:07.461: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3213 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:09:07.461: INFO: >>> kubeConfig: /root/.kube/config
I0131 15:09:07.543377       9 log.go:172] (0xc0014ba630) (0xc001eaf680) Create stream
I0131 15:09:07.543541       9 log.go:172] (0xc0014ba630) (0xc001eaf680) Stream added, broadcasting: 1
I0131 15:09:07.554453       9 log.go:172] (0xc0014ba630) Reply frame received for 1
I0131 15:09:07.554633       9 log.go:172] (0xc0014ba630) (0xc001296640) Create stream
I0131 15:09:07.554670       9 log.go:172] (0xc0014ba630) (0xc001296640) Stream added, broadcasting: 3
I0131 15:09:07.557279       9 log.go:172] (0xc0014ba630) Reply frame received for 3
I0131 15:09:07.557311       9 log.go:172] (0xc0014ba630) (0xc001eaf720) Create stream
I0131 15:09:07.557323       9 log.go:172] (0xc0014ba630) (0xc001eaf720) Stream added, broadcasting: 5
I0131 15:09:07.560684       9 log.go:172] (0xc0014ba630) Reply frame received for 5
I0131 15:09:07.802608       9 log.go:172] (0xc0014ba630) Data frame received for 3
I0131 15:09:07.802819       9 log.go:172] (0xc001296640) (3) Data frame handling
I0131 15:09:07.802880       9 log.go:172] (0xc001296640) (3) Data frame sent
I0131 15:09:07.950967       9 log.go:172] (0xc0014ba630) Data frame received for 1
I0131 15:09:07.951242       9 log.go:172] (0xc0014ba630) (0xc001296640) Stream removed, broadcasting: 3
I0131 15:09:07.951430       9 log.go:172] (0xc001eaf680) (1) Data frame handling
I0131 15:09:07.951774       9 log.go:172] (0xc001eaf680) (1) Data frame sent
I0131 15:09:07.951807       9 log.go:172] (0xc0014ba630) (0xc001eaf680) Stream removed, broadcasting: 1
I0131 15:09:07.951935       9 log.go:172] (0xc0014ba630) (0xc001eaf720) Stream removed, broadcasting: 5
I0131 15:09:07.951988       9 log.go:172] (0xc0014ba630) Go away received
I0131 15:09:07.952979       9 log.go:172] (0xc0014ba630) (0xc001eaf680) Stream removed, broadcasting: 1
I0131 15:09:07.953020       9 log.go:172] (0xc0014ba630) (0xc001296640) Stream removed, broadcasting: 3
I0131 15:09:07.953040       9 log.go:172] (0xc0014ba630) (0xc001eaf720) Stream removed, broadcasting: 5
Jan 31 15:09:07.953: INFO: Waiting for endpoints: map[]
Jan 31 15:09:07.967: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3213 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 31 15:09:07.967: INFO: >>> kubeConfig: /root/.kube/config
I0131 15:09:08.040341       9 log.go:172] (0xc0026dc420) (0xc001c4c820) Create stream
I0131 15:09:08.040571       9 log.go:172] (0xc0026dc420) (0xc001c4c820) Stream added, broadcasting: 1
I0131 15:09:08.048428       9 log.go:172] (0xc0026dc420) Reply frame received for 1
I0131 15:09:08.048461       9 log.go:172] (0xc0026dc420) (0xc001eaf860) Create stream
I0131 15:09:08.048470       9 log.go:172] (0xc0026dc420) (0xc001eaf860) Stream added, broadcasting: 3
I0131 15:09:08.050120       9 log.go:172] (0xc0026dc420) Reply frame received for 3
I0131 15:09:08.050150       9 log.go:172] (0xc0026dc420) (0xc0012966e0) Create stream
I0131 15:09:08.050162       9 log.go:172] (0xc0026dc420) (0xc0012966e0) Stream added, broadcasting: 5
I0131 15:09:08.051279       9 log.go:172] (0xc0026dc420) Reply frame received for 5
I0131 15:09:08.143836       9 log.go:172] (0xc0026dc420) Data frame received for 3
I0131 15:09:08.143889       9 log.go:172] (0xc001eaf860) (3) Data frame handling
I0131 15:09:08.143911       9 log.go:172] (0xc001eaf860) (3) Data frame sent
I0131 15:09:08.282851       9 log.go:172] (0xc0026dc420) Data frame received for 1
I0131 15:09:08.283278       9 log.go:172] (0xc0026dc420) (0xc001eaf860) Stream removed, broadcasting: 3
I0131 15:09:08.283333       9 log.go:172] (0xc001c4c820) (1) Data frame handling
I0131 15:09:08.283362       9 log.go:172] (0xc0026dc420) (0xc0012966e0) Stream removed, broadcasting: 5
I0131 15:09:08.283638       9 log.go:172] (0xc001c4c820) (1) Data frame sent
I0131 15:09:08.283668       9 log.go:172] (0xc0026dc420) (0xc001c4c820) Stream removed, broadcasting: 1
I0131 15:09:08.283722       9 log.go:172] (0xc0026dc420) Go away received
I0131 15:09:08.284797       9 log.go:172] (0xc0026dc420) (0xc001c4c820) Stream removed, broadcasting: 1
I0131 15:09:08.284985       9 log.go:172] (0xc0026dc420) (0xc001eaf860) Stream removed, broadcasting: 3
I0131 15:09:08.284994       9 log.go:172] (0xc0026dc420) (0xc0012966e0) Stream removed, broadcasting: 5
Jan 31 15:09:08.285: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:09:08.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3213" for this suite.
Jan 31 15:09:32.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:09:32.485: INFO: namespace pod-network-test-3213 deletion completed in 24.186478513s

• [SLOW TEST:59.388 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:09:32.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 31 15:09:32.601: INFO: Waiting up to 5m0s for pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41" in namespace "downward-api-2939" to be "success or failure"
Jan 31 15:09:32.608: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319449ms
Jan 31 15:09:34.669: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067074685s
Jan 31 15:09:36.683: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081026049s
Jan 31 15:09:38.729: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127678845s
Jan 31 15:09:40.746: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143995787s
Jan 31 15:09:42.752: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150858891s
STEP: Saw pod success
Jan 31 15:09:42.753: INFO: Pod "downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41" satisfied condition "success or failure"
Jan 31 15:09:42.756: INFO: Trying to get logs from node iruya-node pod downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41 container dapi-container: 
STEP: delete the pod
Jan 31 15:09:42.810: INFO: Waiting for pod downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41 to disappear
Jan 31 15:09:42.874: INFO: Pod downward-api-d7190cb6-8aa8-4454-a41d-14cca1d7eb41 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:09:42.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2939" for this suite.
Jan 31 15:09:49.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:09:49.518: INFO: namespace downward-api-2939 deletion completed in 6.625951844s

• [SLOW TEST:17.033 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
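The Downward API test above verifies that a container's resource limits and requests can be exposed as environment variables. A minimal sketch of the kind of pod spec this test exercises (the pod name and resource values here are illustrative, not taken from the log; the log only shows the generated pod name and the container name `dapi-container`):

```yaml
# Illustrative sketch only -- not the exact manifest the e2e framework generates.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]  # prints the injected variables, then exits (Succeeded)
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
```

Because the container just prints its environment and exits, the pod reaches `Succeeded`, which is the "success or failure" condition the test waits on.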
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:09:49.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 15:10:19.706: INFO: Container started at 2020-01-31 15:09:56 +0000 UTC, pod became ready at 2020-01-31 15:10:18 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:10:19.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9154" for this suite.
Jan 31 15:10:41.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:10:41.894: INFO: namespace container-probe-9154 deletion completed in 22.182301962s

• [SLOW TEST:52.375 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
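The readiness-probe test above asserts that a pod does not report `Ready` before the probe's initial delay elapses (container started at 15:09:56, ready at 15:10:18). A sketch of a readiness probe with an initial delay of roughly that shape (image, port, and timing values are assumptions for illustration):

```yaml
# Illustrative sketch only -- values are assumptions, not taken from the test source.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo           # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine     # any container serving HTTP works here
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20    # pod must not be Ready before this delay
      periodSeconds: 5
```

The kubelet only starts probing after `initialDelaySeconds`, so `Ready` cannot become true earlier, and a readiness probe (unlike a liveness probe) never restarts the container.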
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:10:41.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-81b5ca3c-55ce-4203-aa3f-d408b8f1706a
STEP: Creating a pod to test consume secrets
Jan 31 15:10:42.038: INFO: Waiting up to 5m0s for pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7" in namespace "secrets-208" to be "success or failure"
Jan 31 15:10:42.043: INFO: Pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308908ms
Jan 31 15:10:44.057: INFO: Pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018764969s
Jan 31 15:10:46.074: INFO: Pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035488273s
Jan 31 15:10:48.080: INFO: Pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041219388s
Jan 31 15:10:50.085: INFO: Pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046344592s
STEP: Saw pod success
Jan 31 15:10:50.085: INFO: Pod "pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7" satisfied condition "success or failure"
Jan 31 15:10:50.087: INFO: Trying to get logs from node iruya-node pod pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7 container secret-volume-test: 
STEP: delete the pod
Jan 31 15:10:50.167: INFO: Waiting for pod pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7 to disappear
Jan 31 15:10:50.174: INFO: Pod pod-secrets-9d149620-7ea7-42cc-8902-4cb1d349aca7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:10:50.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-208" for this suite.
Jan 31 15:10:56.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:10:56.401: INFO: namespace secrets-208 deletion completed in 6.222140681s
STEP: Destroying namespace "secret-namespace-1103" for this suite.
Jan 31 15:11:02.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:11:02.558: INFO: namespace secret-namespace-1103 deletion completed in 6.156103744s

• [SLOW TEST:20.663 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
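The Secrets test above creates same-named secrets in two namespaces (`secrets-208` and `secret-namespace-1103`) and checks that the volume mount resolves the secret from the pod's own namespace. A minimal sketch of a secret volume mount of this kind (names are illustrative):

```yaml
# Illustrative sketch only -- names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret     # resolved in the pod's namespace only
```

Secret volume sources are namespace-local: only a secret in the pod's namespace can be mounted, so a same-named secret elsewhere cannot interfere.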
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:11:02.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 31 15:11:02.666: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix523706325/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:11:02.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7641" for this suite.
Jan 31 15:11:08.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:11:08.892: INFO: namespace kubectl-7641 deletion completed in 6.12140415s

• [SLOW TEST:6.333 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:11:08.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0131 15:11:51.584890       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 31 15:11:51.584: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:11:51.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4812" for this suite.
Jan 31 15:11:59.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:12:00.103: INFO: namespace gc-4812 deletion completed in 8.512902283s

• [SLOW TEST:51.210 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
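The garbage-collector test above deletes a replication controller with orphaning delete options and then confirms its pods survive for 30 seconds. The orphaning behavior is driven by the delete request's propagation policy, which can be expressed as a DeleteOptions body (this is the general API mechanism, not the exact call the test source makes):

```yaml
# DeleteOptions body sent with the DELETE request to orphan dependents.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With `Orphan`, the garbage collector removes the owner references from the dependents instead of deleting them; with kubectl of this era the equivalent is `kubectl delete rc <name> --cascade=false`.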
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:12:00.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 31 15:12:02.436: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 31 15:12:08.518: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 31 15:12:18.885: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 31 15:12:18.954: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2986,SelfLink:/apis/apps/v1/namespaces/deployment-2986/deployments/test-cleanup-deployment,UID:2cf407b4-2f4e-438e-ae9c-b8af28775aa0,ResourceVersion:22582104,Generation:1,CreationTimestamp:2020-01-31 15:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 31 15:12:18.965: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2986,SelfLink:/apis/apps/v1/namespaces/deployment-2986/replicasets/test-cleanup-deployment-55bbcbc84c,UID:743411a3-000f-4350-8bd6-676b6833dd17,ResourceVersion:22582107,Generation:1,CreationTimestamp:2020-01-31 15:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2cf407b4-2f4e-438e-ae9c-b8af28775aa0 0xc001c0bb47 0xc001c0bb48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 31 15:12:18.966: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 31 15:12:18.966: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2986,SelfLink:/apis/apps/v1/namespaces/deployment-2986/replicasets/test-cleanup-controller,UID:70deda32-4ead-4228-9a76-7b820df74be9,ResourceVersion:22582106,Generation:1,CreationTimestamp:2020-01-31 15:12:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2cf407b4-2f4e-438e-ae9c-b8af28775aa0 0xc001c0ba57 0xc001c0ba58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 31 15:12:19.039: INFO: Pod "test-cleanup-controller-84zsw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-84zsw,GenerateName:test-cleanup-controller-,Namespace:deployment-2986,SelfLink:/api/v1/namespaces/deployment-2986/pods/test-cleanup-controller-84zsw,UID:f44b6cdd-1f7c-4c37-bac8-e938caf295f3,ResourceVersion:22582102,Generation:0,CreationTimestamp:2020-01-31 15:12:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 70deda32-4ead-4228-9a76-7b820df74be9 0xc00337c567 0xc00337c568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sccx6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sccx6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sccx6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00337c5e0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00337c600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 15:12:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 15:12:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 15:12:18 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-31 15:12:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-31 15:12:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-31 15:12:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fad80719325b57ae51a8d9d87894542c0debe9c7b737f7e14d2fdb6fdb60d504}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 31 15:12:19.039: INFO: Pod "test-cleanup-deployment-55bbcbc84c-xsjc6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-xsjc6,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2986,SelfLink:/api/v1/namespaces/deployment-2986/pods/test-cleanup-deployment-55bbcbc84c-xsjc6,UID:4f025a48-8a51-4acc-8936-b8e5a2ab40bc,ResourceVersion:22582108,Generation:0,CreationTimestamp:2020-01-31 15:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 743411a3-000f-4350-8bd6-676b6833dd17 0xc00337c6e7 0xc00337c6e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sccx6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sccx6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-sccx6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00337c750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00337c770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:12:19.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2986" for this suite.
Jan 31 15:12:27.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:12:27.344: INFO: namespace deployment-2986 deletion completed in 8.248307663s

• [SLOW TEST:27.242 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
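The Deployment cleanup test above relies on `RevisionHistoryLimit:*0`, visible in the struct dump, so that retired ReplicaSets are deleted immediately. A sketch of the relevant part of such a Deployment, using the names and image that appear in the log:

```yaml
# Sketch reconstructed from the logged Deployment dump; abbreviated for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # old ReplicaSets are garbage-collected as soon as they retire
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With a history limit of 0, the pre-existing `test-cleanup-controller` ReplicaSet adopted by the Deployment is removed once the new `test-cleanup-deployment-55bbcbc84c` ReplicaSet takes over, which is what the test waits for.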
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:12:27.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-c8g2m in namespace proxy-7022
I0131 15:12:27.618602       9 runners.go:180] Created replication controller with name: proxy-service-c8g2m, namespace: proxy-7022, replica count: 1
I0131 15:12:28.669948       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:29.670930       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:30.671723       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:31.673726       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:32.674338       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:33.674989       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:34.675865       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:35.676514       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0131 15:12:36.677738       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 15:12:37.679049       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0131 15:12:38.680182       9 runners.go:180] proxy-service-c8g2m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 31 15:12:38.689: INFO: setup took 11.205548532s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 31 15:12:38.716: INFO: (0) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 26.861564ms)
Jan 31 15:12:38.717: INFO: (0) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 27.361791ms)
Jan 31 15:12:38.717: INFO: (0) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 27.514055ms)
Jan 31 15:12:38.717: INFO: (0) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 27.738259ms)
Jan 31 15:12:38.719: INFO: (0) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 29.176071ms)
Jan 31 15:12:38.719: INFO: (0) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 29.307666ms)
Jan 31 15:12:38.723: INFO: (0) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 33.593406ms)
Jan 31 15:12:38.723: INFO: (0) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 33.931728ms)
Jan 31 15:12:38.724: INFO: (0) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 34.605301ms)
Jan 31 15:12:38.725: INFO: (0) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 35.340547ms)
Jan 31 15:12:38.726: INFO: (0) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 36.71147ms)
Jan 31 15:12:38.732: INFO: (0) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 42.887541ms)
Jan 31 15:12:38.732: INFO: (0) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 15.390152ms)
Jan 31 15:12:38.758: INFO: (1) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 15.673319ms)
Jan 31 15:12:38.758: INFO: (1) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 16.192094ms)
Jan 31 15:12:38.766: INFO: (1) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 24.136033ms)
Jan 31 15:12:38.766: INFO: (1) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 24.284866ms)
Jan 31 15:12:38.767: INFO: (1) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 24.644892ms)
Jan 31 15:12:38.767: INFO: (1) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 24.691483ms)
Jan 31 15:12:38.768: INFO: (1) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 25.714115ms)
Jan 31 15:12:38.768: INFO: (1) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 26.033175ms)
Jan 31 15:12:38.768: INFO: (1) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 26.825756ms)
Jan 31 15:12:38.769: INFO: (1) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 27.374972ms)
Jan 31 15:12:38.769: INFO: (1) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 27.108746ms)
Jan 31 15:12:38.769: INFO: (1) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 27.42871ms)
Jan 31 15:12:38.769: INFO: (1) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: ... (200; 9.721006ms)
Jan 31 15:12:38.779: INFO: (2) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 9.706932ms)
Jan 31 15:12:38.783: INFO: (2) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 13.629783ms)
Jan 31 15:12:38.784: INFO: (2) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 14.606958ms)
Jan 31 15:12:38.785: INFO: (2) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 15.25483ms)
Jan 31 15:12:38.785: INFO: (2) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: ... (200; 7.455575ms)
Jan 31 15:12:38.798: INFO: (3) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test (200; 11.104692ms)
Jan 31 15:12:38.800: INFO: (3) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 11.519381ms)
Jan 31 15:12:38.801: INFO: (3) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 12.712521ms)
Jan 31 15:12:38.803: INFO: (3) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 14.88995ms)
Jan 31 15:12:38.806: INFO: (3) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 17.506706ms)
Jan 31 15:12:38.806: INFO: (3) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 17.365541ms)
Jan 31 15:12:38.806: INFO: (3) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 17.66564ms)
Jan 31 15:12:38.806: INFO: (3) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 18.176572ms)
Jan 31 15:12:38.818: INFO: (4) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 11.155512ms)
Jan 31 15:12:38.817: INFO: (4) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 10.875013ms)
Jan 31 15:12:38.818: INFO: (4) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 11.110051ms)
Jan 31 15:12:38.819: INFO: (4) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 12.240304ms)
Jan 31 15:12:38.819: INFO: (4) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 12.440823ms)
Jan 31 15:12:38.819: INFO: (4) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 12.377442ms)
Jan 31 15:12:38.820: INFO: (4) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 14.555674ms)
Jan 31 15:12:38.821: INFO: (4) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 14.709456ms)
Jan 31 15:12:38.822: INFO: (4) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 15.714166ms)
Jan 31 15:12:38.822: INFO: (4) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 16.025717ms)
Jan 31 15:12:38.823: INFO: (4) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 16.139685ms)
Jan 31 15:12:38.823: INFO: (4) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 16.822778ms)
Jan 31 15:12:38.824: INFO: (4) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 17.366907ms)
Jan 31 15:12:38.832: INFO: (5) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 8.094174ms)
Jan 31 15:12:38.833: INFO: (5) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 8.533372ms)
Jan 31 15:12:38.833: INFO: (5) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 8.586045ms)
Jan 31 15:12:38.833: INFO: (5) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 8.303006ms)
Jan 31 15:12:38.834: INFO: (5) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 9.183978ms)
Jan 31 15:12:38.834: INFO: (5) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 9.763203ms)
Jan 31 15:12:38.834: INFO: (5) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 6.520624ms)
Jan 31 15:12:38.846: INFO: (6) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 6.844331ms)
Jan 31 15:12:38.846: INFO: (6) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: ... (200; 8.446132ms)
Jan 31 15:12:38.848: INFO: (6) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 8.495836ms)
Jan 31 15:12:38.848: INFO: (6) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 8.613636ms)
Jan 31 15:12:38.850: INFO: (6) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 10.730801ms)
Jan 31 15:12:38.852: INFO: (6) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 12.824788ms)
Jan 31 15:12:38.852: INFO: (6) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 12.880371ms)
Jan 31 15:12:38.852: INFO: (6) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 12.914846ms)
Jan 31 15:12:38.852: INFO: (6) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 13.224039ms)
Jan 31 15:12:38.854: INFO: (6) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 14.628377ms)
Jan 31 15:12:38.868: INFO: (7) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 14.48912ms)
Jan 31 15:12:38.868: INFO: (7) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 14.630591ms)
Jan 31 15:12:38.869: INFO: (7) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 15.728096ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 15.868976ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 16.04923ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 15.989408ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 16.010672ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 16.137693ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 16.005972ms)
Jan 31 15:12:38.870: INFO: (7) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 16.530557ms)
Jan 31 15:12:38.872: INFO: (7) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test (200; 12.52537ms)
Jan 31 15:12:38.889: INFO: (8) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 13.172175ms)
Jan 31 15:12:38.889: INFO: (8) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 13.314886ms)
Jan 31 15:12:38.889: INFO: (8) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 13.433182ms)
Jan 31 15:12:38.890: INFO: (8) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 13.764369ms)
Jan 31 15:12:38.890: INFO: (8) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 13.808231ms)
Jan 31 15:12:38.890: INFO: (8) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 13.80148ms)
Jan 31 15:12:38.890: INFO: (8) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 14.392659ms)
Jan 31 15:12:38.890: INFO: (8) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 14.372272ms)
Jan 31 15:12:38.890: INFO: (8) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 14.411045ms)
Jan 31 15:12:38.892: INFO: (8) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 16.489642ms)
Jan 31 15:12:38.901: INFO: (9) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 8.702035ms)
Jan 31 15:12:38.902: INFO: (9) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 9.729731ms)
Jan 31 15:12:38.902: INFO: (9) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 9.725081ms)
Jan 31 15:12:38.902: INFO: (9) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 10.048266ms)
Jan 31 15:12:38.903: INFO: (9) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 10.19156ms)
Jan 31 15:12:38.904: INFO: (9) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 11.035479ms)
Jan 31 15:12:38.904: INFO: (9) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 11.098947ms)
Jan 31 15:12:38.904: INFO: (9) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 11.545248ms)
Jan 31 15:12:38.904: INFO: (9) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 11.713151ms)
Jan 31 15:12:38.905: INFO: (9) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 12.912688ms)
Jan 31 15:12:38.906: INFO: (9) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 13.022346ms)
Jan 31 15:12:38.906: INFO: (9) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 13.289137ms)
Jan 31 15:12:38.906: INFO: (9) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 13.461207ms)
Jan 31 15:12:38.906: INFO: (9) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: ... (200; 13.139708ms)
Jan 31 15:12:38.920: INFO: (10) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 13.239934ms)
Jan 31 15:12:38.920: INFO: (10) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 13.344658ms)
Jan 31 15:12:38.922: INFO: (10) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 15.206967ms)
Jan 31 15:12:38.923: INFO: (10) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 16.498251ms)
Jan 31 15:12:38.924: INFO: (10) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 16.758229ms)
Jan 31 15:12:38.924: INFO: (10) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 16.716014ms)
Jan 31 15:12:38.924: INFO: (10) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 17.091075ms)
Jan 31 15:12:38.924: INFO: (10) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 17.406684ms)
Jan 31 15:12:38.924: INFO: (10) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 17.517573ms)
Jan 31 15:12:38.927: INFO: (10) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 20.465293ms)
Jan 31 15:12:38.927: INFO: (10) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 20.325162ms)
Jan 31 15:12:38.927: INFO: (10) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 20.429748ms)
Jan 31 15:12:38.927: INFO: (10) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 20.589756ms)
Jan 31 15:12:38.928: INFO: (10) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 20.756803ms)
Jan 31 15:12:38.935: INFO: (11) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 7.322521ms)
Jan 31 15:12:38.935: INFO: (11) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 7.703439ms)
Jan 31 15:12:38.936: INFO: (11) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 7.958272ms)
Jan 31 15:12:38.936: INFO: (11) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 8.503364ms)
Jan 31 15:12:38.938: INFO: (11) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 10.062278ms)
Jan 31 15:12:38.938: INFO: (11) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 10.314585ms)
Jan 31 15:12:38.938: INFO: (11) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 10.298503ms)
Jan 31 15:12:38.938: INFO: (11) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 10.811874ms)
Jan 31 15:12:38.939: INFO: (11) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 11.281694ms)
Jan 31 15:12:38.940: INFO: (11) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 12.446494ms)
Jan 31 15:12:38.941: INFO: (11) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 12.893951ms)
Jan 31 15:12:38.941: INFO: (11) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 12.727807ms)
Jan 31 15:12:38.941: INFO: (11) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 13.15422ms)
Jan 31 15:12:38.941: INFO: (11) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 12.993632ms)
Jan 31 15:12:38.946: INFO: (12) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 4.708837ms)
Jan 31 15:12:38.946: INFO: (12) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 4.788919ms)
Jan 31 15:12:38.953: INFO: (12) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 12.289661ms)
Jan 31 15:12:38.953: INFO: (12) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 12.274839ms)
Jan 31 15:12:38.954: INFO: (12) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 13.16117ms)
Jan 31 15:12:38.955: INFO: (12) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 13.979989ms)
Jan 31 15:12:38.955: INFO: (12) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 14.133097ms)
Jan 31 15:12:38.955: INFO: (12) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 14.162125ms)
Jan 31 15:12:38.955: INFO: (12) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 14.109185ms)
Jan 31 15:12:38.956: INFO: (12) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 14.80719ms)
Jan 31 15:12:38.956: INFO: (12) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 15.016377ms)
Jan 31 15:12:38.956: INFO: (12) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test (200; 15.340313ms)
Jan 31 15:12:38.979: INFO: (13) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 19.777549ms)
Jan 31 15:12:38.980: INFO: (13) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 19.966546ms)
Jan 31 15:12:38.980: INFO: (13) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 20.45994ms)
Jan 31 15:12:38.980: INFO: (13) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 20.701081ms)
Jan 31 15:12:38.980: INFO: (13) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 20.607867ms)
Jan 31 15:12:38.980: INFO: (13) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 20.067807ms)
Jan 31 15:12:38.980: INFO: (13) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 20.360678ms)
Jan 31 15:12:38.981: INFO: (13) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 21.093366ms)
Jan 31 15:12:38.981: INFO: (13) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 21.670685ms)
Jan 31 15:12:38.982: INFO: (13) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 21.637423ms)
Jan 31 15:12:38.982: INFO: (13) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 7.123703ms)
Jan 31 15:12:38.992: INFO: (14) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 7.097319ms)
Jan 31 15:12:38.992: INFO: (14) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 7.993235ms)
Jan 31 15:12:38.993: INFO: (14) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 7.511611ms)
Jan 31 15:12:38.993: INFO: (14) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 7.509541ms)
Jan 31 15:12:38.993: INFO: (14) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 8.102429ms)
Jan 31 15:12:38.993: INFO: (14) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: ... (200; 6.914581ms)
Jan 31 15:12:39.004: INFO: (15) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 7.12298ms)
Jan 31 15:12:39.008: INFO: (15) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 10.362438ms)
Jan 31 15:12:39.009: INFO: (15) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 11.546791ms)
Jan 31 15:12:39.009: INFO: (15) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 11.367549ms)
Jan 31 15:12:39.009: INFO: (15) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 11.584857ms)
Jan 31 15:12:39.009: INFO: (15) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 11.526033ms)
Jan 31 15:12:39.009: INFO: (15) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 11.484105ms)
Jan 31 15:12:39.010: INFO: (15) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 12.251177ms)
Jan 31 15:12:39.010: INFO: (15) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 12.146657ms)
Jan 31 15:12:39.010: INFO: (15) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 12.555817ms)
Jan 31 15:12:39.010: INFO: (15) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 13.123245ms)
Jan 31 15:12:39.018: INFO: (16) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 6.911563ms)
Jan 31 15:12:39.020: INFO: (16) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 8.705335ms)
Jan 31 15:12:39.020: INFO: (16) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 9.103889ms)
Jan 31 15:12:39.020: INFO: (16) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test (200; 10.077378ms)
Jan 31 15:12:39.021: INFO: (16) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 10.229219ms)
Jan 31 15:12:39.022: INFO: (16) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 10.865939ms)
Jan 31 15:12:39.022: INFO: (16) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 11.321926ms)
Jan 31 15:12:39.024: INFO: (16) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 12.83086ms)
Jan 31 15:12:39.038: INFO: (16) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 27.018706ms)
Jan 31 15:12:39.038: INFO: (16) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 27.091645ms)
Jan 31 15:12:39.038: INFO: (16) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 27.111385ms)
Jan 31 15:12:39.038: INFO: (16) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 27.336714ms)
Jan 31 15:12:39.038: INFO: (16) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 27.384439ms)
Jan 31 15:12:39.039: INFO: (16) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 28.017322ms)
Jan 31 15:12:39.045: INFO: (17) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 6.13315ms)
Jan 31 15:12:39.047: INFO: (17) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 7.259707ms)
Jan 31 15:12:39.047: INFO: (17) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 7.718851ms)
Jan 31 15:12:39.048: INFO: (17) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 8.802856ms)
Jan 31 15:12:39.049: INFO: (17) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 9.956562ms)
Jan 31 15:12:39.049: INFO: (17) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 10.082539ms)
Jan 31 15:12:39.049: INFO: (17) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 10.212389ms)
Jan 31 15:12:39.050: INFO: (17) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test<... (200; 4.131736ms)
Jan 31 15:12:39.058: INFO: (18) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 4.255533ms)
Jan 31 15:12:39.062: INFO: (18) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:462/proxy/: tls qux (200; 8.72826ms)
Jan 31 15:12:39.067: INFO: (18) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname2/proxy/: bar (200; 12.88884ms)
Jan 31 15:12:39.067: INFO: (18) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 13.14691ms)
Jan 31 15:12:39.068: INFO: (18) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 13.972548ms)
Jan 31 15:12:39.068: INFO: (18) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 13.9981ms)
Jan 31 15:12:39.068: INFO: (18) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 14.077981ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 15.277278ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: test (200; 15.076463ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname1/proxy/: foo (200; 15.25503ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 15.356604ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 15.304604ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/services/http:proxy-service-c8g2m:portname2/proxy/: bar (200; 15.238327ms)
Jan 31 15:12:39.069: INFO: (18) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 15.310454ms)
Jan 31 15:12:39.108: INFO: (19) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:160/proxy/: foo (200; 39.271433ms)
Jan 31 15:12:39.110: INFO: (19) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname2/proxy/: tls qux (200; 41.305308ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:1080/proxy/: test<... (200; 44.5815ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:460/proxy/: tls baz (200; 44.591139ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n/proxy/: test (200; 44.743518ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/services/https:proxy-service-c8g2m:tlsportname1/proxy/: tls baz (200; 44.679541ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/pods/proxy-service-c8g2m-4x46n:162/proxy/: bar (200; 44.677139ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/pods/http:proxy-service-c8g2m-4x46n:1080/proxy/: ... (200; 44.90362ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/services/proxy-service-c8g2m:portname1/proxy/: foo (200; 44.77928ms)
Jan 31 15:12:39.114: INFO: (19) /api/v1/namespaces/proxy-7022/pods/https:proxy-service-c8g2m-4x46n:443/proxy/: ...
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 31 15:12:52.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 31 15:12:55.309: INFO: stderr: ""
Jan 31 15:12:55.309: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:12:55.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-327" for this suite.
Jan 31 15:13:01.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:13:01.450: INFO: namespace kubectl-327 deletion completed in 6.132089144s

• [SLOW TEST:8.696 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
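An aside on the escaped stdout captured above: kubectl colorizes `cluster-info` with ANSI escape sequences, which is why the raw log contains runs like `\x1b[0;32m`. A minimal sketch (not part of the log) of stripping them with only `printf` and `sed`, using the master URL from this run:

```shell
# Strip ANSI color escapes (ESC [ ... m) from captured kubectl output.
ESC=$(printf '\033')
raw="${ESC}[0;32mKubernetes master${ESC}[0m is running at ${ESC}[0;33mhttps://172.24.4.57:6443${ESC}[0m"
# The sed expression matches a literal ESC, '[', any digits/semicolons, then 'm'.
clean=$(printf '%s\n' "$raw" | sed "s/${ESC}\[[0-9;]*m//g")
echo "$clean"
```

The same filter is handy when grepping archived e2e logs for the cluster endpoints.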
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:13:01.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 31 15:13:01.539: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 31 15:13:01.547: INFO: Waiting for terminating namespaces to be deleted...
Jan 31 15:13:01.549: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 31 15:13:01.557: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.557: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 15:13:01.557: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 31 15:13:01.557: INFO: 	Container weave ready: true, restart count 0
Jan 31 15:13:01.557: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 15:13:01.557: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 31 15:13:01.595: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.595: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 31 15:13:01.595: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.595: INFO: 	Container coredns ready: true, restart count 0
Jan 31 15:13:01.595: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.595: INFO: 	Container etcd ready: true, restart count 0
Jan 31 15:13:01.596: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 31 15:13:01.596: INFO: 	Container weave ready: true, restart count 0
Jan 31 15:13:01.596: INFO: 	Container weave-npc ready: true, restart count 0
Jan 31 15:13:01.596: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.596: INFO: 	Container coredns ready: true, restart count 0
Jan 31 15:13:01.596: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.596: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 31 15:13:01.596: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.596: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 31 15:13:01.596: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 31 15:13:01.596: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-290ccdfa-cca8-40ff-8aae-7e8671810e37 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-290ccdfa-cca8-40ff-8aae-7e8671810e37 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-290ccdfa-cca8-40ff-8aae-7e8671810e37
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:13:19.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9786" for this suite.
Jan 31 15:13:34.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:13:34.117: INFO: namespace sched-pred-9786 deletion completed in 14.142297927s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.667 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
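The STEP sequence above (apply a random label to the found node, then relaunch the pod with a matching selector) corresponds to a pod spec along these lines. Only the label key and its value `42` come from the log; the pod name and pause image are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels               # illustrative name
spec:
  nodeSelector:                   # must match the label applied to iruya-node
    kubernetes.io/e2e-290ccdfa-cca8-40ff-8aae-7e8671810e37: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1   # placeholder image
```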
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:13:34.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:13:34.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2" in namespace "downward-api-6326" to be "success or failure"
Jan 31 15:13:34.233: INFO: Pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.149967ms
Jan 31 15:13:36.260: INFO: Pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034702432s
Jan 31 15:13:38.293: INFO: Pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067074359s
Jan 31 15:13:40.307: INFO: Pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080917143s
Jan 31 15:13:42.316: INFO: Pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09051462s
STEP: Saw pod success
Jan 31 15:13:42.317: INFO: Pod "downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2" satisfied condition "success or failure"
Jan 31 15:13:42.322: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2 container client-container: 
STEP: delete the pod
Jan 31 15:13:42.482: INFO: Waiting for pod downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2 to disappear
Jan 31 15:13:42.491: INFO: Pod downwardapi-volume-a11ab502-adaf-4da2-9d49-a64334c772c2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:13:42.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6326" for this suite.
Jan 31 15:13:48.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:13:48.735: INFO: namespace downward-api-6326 deletion completed in 6.238847369s

• [SLOW TEST:14.618 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
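A hedged sketch of the kind of pod this test creates: a downward API volume exposing only the pod name. The container name `client-container` matches the log; the image, command, and mount path are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # matches the container name in the log
    image: busybox                # placeholder; the e2e test uses its own image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # only the pod name is projected
```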
SSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:13:48.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8062, will wait for the garbage collector to delete the pods
Jan 31 15:14:00.898: INFO: Deleting Job.batch foo took: 9.244393ms
Jan 31 15:14:01.199: INFO: Terminating Job.batch foo pods took: 300.475242ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:14:46.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8062" for this suite.
Jan 31 15:14:52.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:14:52.820: INFO: namespace job-8062 deletion completed in 6.198369999s

• [SLOW TEST:64.085 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
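For reference, a minimal Job of the shape this test deletes. The name `foo` matches the log; `parallelism`, image, and command are assumptions (the log only asserts active pods == parallelism):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo                 # matches Job.batch foo in the log
spec:
  parallelism: 2            # assumed value
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c             # illustrative container name
        image: busybox      # placeholder image
        command: ["sleep", "3600"]
```

The log line "will wait for the garbage collector to delete the pods" corresponds to cascading deletion: the Job object goes first, then the garbage collector reaps its pods.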
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:14:52.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 31 15:15:01.531: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3212 pod-service-account-9efcd469-53ce-4346-8bb7-c4c1fbed849e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 31 15:15:02.150: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3212 pod-service-account-9efcd469-53ce-4346-8bb7-c4c1fbed849e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 31 15:15:02.828: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3212 pod-service-account-9efcd469-53ce-4346-8bb7-c4c1fbed849e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:15:03.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3212" for this suite.
Jan 31 15:15:09.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:15:09.619: INFO: namespace svcaccounts-3212 deletion completed in 6.258719915s

• [SLOW TEST:16.797 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
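The three `kubectl exec ... cat` invocations above read from the auto-mounted token volume, whose path is fixed by Kubernetes. A small sketch (paths only, not executed against a cluster) of the files involved:

```shell
# The service account admission controller mounts the token volume at a
# fixed path; these are the three files the test reads from the container.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
files=""
for f in token ca.crt namespace; do
  files="$files$SA_DIR/$f "
done
echo "$files"
```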
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:15:09.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 31 15:15:09.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96" in namespace "projected-9315" to be "success or failure"
Jan 31 15:15:09.793: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.445286ms
Jan 31 15:15:11.807: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024300002s
Jan 31 15:15:13.819: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036965352s
Jan 31 15:15:15.853: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070578825s
Jan 31 15:15:17.872: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089258714s
Jan 31 15:15:19.890: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107889979s
STEP: Saw pod success
Jan 31 15:15:19.891: INFO: Pod "downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96" satisfied condition "success or failure"
Jan 31 15:15:19.903: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96 container client-container: 
STEP: delete the pod
Jan 31 15:15:20.033: INFO: Waiting for pod downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96 to disappear
Jan 31 15:15:20.043: INFO: Pod downwardapi-volume-aee4c4b5-95c0-42b7-97ef-fa650a1f8f96 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:15:20.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9315" for this suite.
Jan 31 15:15:26.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:15:26.360: INFO: namespace projected-9315 deletion completed in 6.303607545s

• [SLOW TEST:16.740 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
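This is the projected-volume variant of the same podname test: the identical downward API item, wrapped in a `projected` volume's `sources` list. As before, the container name matches the log and the rest is an illustrative assumption:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # matches the container name in the log
    image: busybox                # placeholder image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```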
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:15:26.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 31 15:15:35.702: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:15:35.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9482" for this suite.
Jan 31 15:15:41.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:15:42.079: INFO: namespace container-runtime-9482 deletion completed in 6.251359931s

• [SLOW TEST:15.717 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
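A hedged sketch of a pod matching the expectation in the log (`Expected: &{OK}`): the container writes `OK` to the default termination-log path before exiting successfully, with the policy under test set explicitly. Image and command are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox              # placeholder image
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log     # the default path
    terminationMessagePolicy: FallbackToLogsOnError  # policy under test
```

With this policy, the kubelet reads the file when it is non-empty (as here) and falls back to the container log tail only when the container fails with an empty message file.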
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:15:42.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 31 15:15:42.130: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 31 15:15:42.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9520'
Jan 31 15:15:42.747: INFO: stderr: ""
Jan 31 15:15:42.747: INFO: stdout: "service/redis-slave created\n"
Jan 31 15:15:42.748: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 31 15:15:42.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9520'
Jan 31 15:15:43.253: INFO: stderr: ""
Jan 31 15:15:43.253: INFO: stdout: "service/redis-master created\n"
Jan 31 15:15:43.254: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 31 15:15:43.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9520'
Jan 31 15:15:43.866: INFO: stderr: ""
Jan 31 15:15:43.866: INFO: stdout: "service/frontend created\n"
Jan 31 15:15:43.868: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 31 15:15:43.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9520'
Jan 31 15:15:44.421: INFO: stderr: ""
Jan 31 15:15:44.421: INFO: stdout: "deployment.apps/frontend created\n"
Jan 31 15:15:44.422: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 31 15:15:44.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9520'
Jan 31 15:15:45.053: INFO: stderr: ""
Jan 31 15:15:45.054: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 31 15:15:45.054: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 31 15:15:45.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9520'
Jan 31 15:15:46.405: INFO: stderr: ""
Jan 31 15:15:46.406: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 31 15:15:46.406: INFO: Waiting for all frontend pods to be Running.
Jan 31 15:16:11.459: INFO: Waiting for frontend to serve content.
Jan 31 15:16:13.725: INFO: Trying to add a new entry to the guestbook.
Jan 31 15:16:13.765: INFO: Verifying that added entry can be retrieved.
Jan 31 15:16:13.805: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Jan 31 15:16:18.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9520'
Jan 31 15:16:19.150: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:16:19.150: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 15:16:19.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9520'
Jan 31 15:16:19.376: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:16:19.376: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 15:16:19.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9520'
Jan 31 15:16:19.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:16:19.502: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 15:16:19.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9520'
Jan 31 15:16:19.677: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:16:19.677: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 15:16:19.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9520'
Jan 31 15:16:19.802: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:16:19.802: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 31 15:16:19.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9520'
Jan 31 15:16:20.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 31 15:16:20.119: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:16:20.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9520" for this suite.
Jan 31 15:17:00.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:17:00.488: INFO: namespace kubectl-9520 deletion completed in 40.352426112s

• [SLOW TEST:78.409 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 31 15:17:00.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 31 15:17:00.669: INFO: Waiting up to 5m0s for pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365" in namespace "emptydir-1483" to be "success or failure"
Jan 31 15:17:00.676: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365": Phase="Pending", Reason="", readiness=false. Elapsed: 7.05872ms
Jan 31 15:17:02.690: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020729457s
Jan 31 15:17:04.710: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040380552s
Jan 31 15:17:06.729: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059342743s
Jan 31 15:17:08.741: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07121012s
Jan 31 15:17:10.752: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08222257s
STEP: Saw pod success
Jan 31 15:17:10.752: INFO: Pod "pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365" satisfied condition "success or failure"
Jan 31 15:17:10.758: INFO: Trying to get logs from node iruya-node pod pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365 container test-container: 
STEP: delete the pod
Jan 31 15:17:10.879: INFO: Waiting for pod pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365 to disappear
Jan 31 15:17:10.892: INFO: Pod pod-d9abc2e3-94d6-4792-b989-eaf3afc6e365 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 31 15:17:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1483" for this suite.
Jan 31 15:17:17.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 31 15:17:17.135: INFO: namespace emptydir-1483 deletion completed in 6.136745759s

• [SLOW TEST:16.646 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
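A minimal sketch of an emptyDir pod of the kind this test creates: medium omitted (default, i.e. node storage), with a mount whose permissions the framework checks for 0777. The container name `test-container` matches the log; image and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container        # matches the container name in the log
    image: busybox              # placeholder; the e2e test uses its mounttest image
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium; no medium: Memory
```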
SS
Jan 31 15:17:17.135: INFO: Running AfterSuite actions on all nodes
Jan 31 15:17:17.135: INFO: Running AfterSuite actions on node 1
Jan 31 15:17:17.135: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8464.601 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8465.03s)
FAIL