I0811 07:45:20.837543 6 e2e.go:243] Starting e2e run "4f8c74a2-8748-43fc-a184-4dc31b6847fb" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597131919 - Will randomize all specs
Will run 215 of 4413 specs
Aug 11 07:45:21.013: INFO: >>> kubeConfig: /root/.kube/config
Aug 11 07:45:21.017: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 11 07:45:21.037: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 11 07:45:21.066: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 11 07:45:21.066: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 11 07:45:21.066: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 11 07:45:21.075: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 11 07:45:21.075: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 11 07:45:21.075: INFO: e2e test version: v1.15.12
Aug 11 07:45:21.076: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:45:21.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Aug 11 07:45:21.141: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:45:29.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6359" for this suite.
Aug 11 07:45:35.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:45:35.274: INFO: namespace kubelet-test-6359 deletion completed in 6.108571209s
• [SLOW TEST:14.197 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:45:35.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug 11 07:45:35.454: INFO: Waiting up to 5m0s for pod "client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc" in namespace "containers-3535" to be "success or failure"
Aug 11 07:45:35.467: INFO: Pod "client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.211209ms
Aug 11 07:45:37.516: INFO: Pod "client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061787259s
Aug 11 07:45:39.520: INFO: Pod "client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065857728s
STEP: Saw pod success
Aug 11 07:45:39.520: INFO: Pod "client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc" satisfied condition "success or failure"
Aug 11 07:45:39.524: INFO: Trying to get logs from node iruya-worker2 pod client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc container test-container:
STEP: delete the pod
Aug 11 07:45:39.666: INFO: Waiting for pod client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc to disappear
Aug 11 07:45:39.675: INFO: Pod client-containers-6bc0575e-7649-468f-a48f-5551e8c728fc no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:45:39.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3535" for this suite.
Aug 11 07:45:45.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:45:45.867: INFO: namespace containers-3535 deletion completed in 6.188982329s
• [SLOW TEST:10.593 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:45:45.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 07:45:45.950: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94" in namespace "downward-api-5429" to be "success or failure"
Aug 11 07:45:45.963: INFO: Pod "downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94": Phase="Pending", Reason="", readiness=false. Elapsed: 13.27239ms
Aug 11 07:45:47.968: INFO: Pod "downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017792166s
Aug 11 07:45:49.973: INFO: Pod "downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022817656s
STEP: Saw pod success
Aug 11 07:45:49.973: INFO: Pod "downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94" satisfied condition "success or failure"
Aug 11 07:45:49.976: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94 container client-container:
STEP: delete the pod
Aug 11 07:45:50.000: INFO: Waiting for pod downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94 to disappear
Aug 11 07:45:50.046: INFO: Pod downwardapi-volume-22eaf960-efbe-4569-a7ab-b2155cb2ba94 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:45:50.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5429" for this suite.
Aug 11 07:45:56.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:45:56.154: INFO: namespace downward-api-5429 deletion completed in 6.103366152s
• [SLOW TEST:10.286 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:45:56.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6923.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6923.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6923.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6923.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6923.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6923.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 07:46:02.312: INFO: DNS probes using dns-6923/dns-test-8144b418-cc4a-48e6-bfde-b0794e9f8a0c succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:46:02.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6923" for this suite.
Aug 11 07:46:08.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:46:08.506: INFO: namespace dns-6923 deletion completed in 6.119813496s
• [SLOW TEST:12.352 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:46:08.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f85b0e0a-b17f-464d-896a-16bf0b0320a4
STEP: Creating a pod to test consume secrets
Aug 11 07:46:08.610: INFO: Waiting up to 5m0s for pod "pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83" in namespace "secrets-1424" to be "success or failure"
Aug 11 07:46:08.612: INFO: Pod "pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233137ms
Aug 11 07:46:10.616: INFO: Pod "pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006344938s
Aug 11 07:46:12.621: INFO: Pod "pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010687796s
STEP: Saw pod success
Aug 11 07:46:12.621: INFO: Pod "pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83" satisfied condition "success or failure"
Aug 11 07:46:12.625: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83 container secret-volume-test:
STEP: delete the pod
Aug 11 07:46:12.661: INFO: Waiting for pod pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83 to disappear
Aug 11 07:46:12.675: INFO: Pod pod-secrets-eeb6926b-400d-4202-8a48-5d30bc8f4e83 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:46:12.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1424" for this suite.
Aug 11 07:46:18.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:46:18.793: INFO: namespace secrets-1424 deletion completed in 6.113989882s
• [SLOW TEST:10.286 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:46:18.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 07:46:19.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d" in namespace "downward-api-691" to be "success or failure"
Aug 11 07:46:19.029: INFO: Pod "downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.58577ms
Aug 11 07:46:21.056: INFO: Pod "downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037441576s
Aug 11 07:46:23.107: INFO: Pod "downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088351049s
Aug 11 07:46:25.275: INFO: Pod "downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25674237s
STEP: Saw pod success
Aug 11 07:46:25.275: INFO: Pod "downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d" satisfied condition "success or failure"
Aug 11 07:46:25.279: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d container client-container:
STEP: delete the pod
Aug 11 07:46:25.667: INFO: Waiting for pod downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d to disappear
Aug 11 07:46:25.772: INFO: Pod downwardapi-volume-ad8f690d-e58c-4d4d-ae93-4d7059b1817d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:46:25.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-691" for this suite.
Aug 11 07:46:31.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:46:31.903: INFO: namespace downward-api-691 deletion completed in 6.116083619s
• [SLOW TEST:13.111 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:46:31.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:46:36.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2373" for this suite.
Aug 11 07:47:18.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:47:18.107: INFO: namespace kubelet-test-2373 deletion completed in 42.091447979s
• [SLOW TEST:46.204 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:47:18.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 07:47:18.426: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796" in namespace "projected-2007" to be "success or failure"
Aug 11 07:47:18.468: INFO: Pod "downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796": Phase="Pending", Reason="", readiness=false. Elapsed: 41.532331ms
Aug 11 07:47:20.472: INFO: Pod "downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045786438s
Aug 11 07:47:22.482: INFO: Pod "downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056149164s
STEP: Saw pod success
Aug 11 07:47:22.482: INFO: Pod "downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796" satisfied condition "success or failure"
Aug 11 07:47:22.512: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796 container client-container:
STEP: delete the pod
Aug 11 07:47:22.526: INFO: Waiting for pod downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796 to disappear
Aug 11 07:47:22.530: INFO: Pod downwardapi-volume-acdd87da-f234-4f62-ba3d-1eceebc8f796 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:47:22.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2007" for this suite.
Aug 11 07:47:28.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:47:28.623: INFO: namespace projected-2007 deletion completed in 6.089552775s
• [SLOW TEST:10.516 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:47:28.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 11 07:47:28.704: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 11 07:47:33.709: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:47:34.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1352" for this suite.
Aug 11 07:47:40.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:47:42.202: INFO: namespace replication-controller-1352 deletion completed in 7.46759287s
• [SLOW TEST:13.578 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:47:42.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0811 07:48:12.791347 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 07:48:12.791: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:48:12.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5680" for this suite.
Aug 11 07:48:18.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:48:18.877: INFO: namespace gc-5680 deletion completed in 6.082440905s
• [SLOW TEST:36.675 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:48:18.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 11 07:48:18.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 11 07:48:21.609: INFO: stderr: ""
Aug 11 07:48:21.609: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:38261\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:38261/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:48:21.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9819" for this suite.
Aug 11 07:48:27.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:48:27.692: INFO: namespace kubectl-9819 deletion completed in 6.079011279s
• [SLOW TEST:8.814 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:48:27.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-e25d9d14-e3fa-495c-a8a5-627cc86fda9c
STEP: Creating a pod to test consume secrets
Aug 11 07:48:27.846: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570" in namespace "projected-2805" to be "success or failure"
Aug 11 07:48:27.905: INFO: Pod "pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570": Phase="Pending", Reason="", readiness=false. Elapsed: 59.028663ms
Aug 11 07:48:29.917: INFO: Pod "pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071114503s
Aug 11 07:48:32.019: INFO: Pod "pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173076685s
STEP: Saw pod success
Aug 11 07:48:32.019: INFO: Pod "pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570" satisfied condition "success or failure"
Aug 11 07:48:32.072: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570 container projected-secret-volume-test:
STEP: delete the pod
Aug 11 07:48:32.175: INFO: Waiting for pod pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570 to disappear
Aug 11 07:48:32.178: INFO: Pod pod-projected-secrets-c33d218d-ea54-4a92-b3e7-46d297154570 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:48:32.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2805" for this suite.
Aug 11 07:48:38.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:48:38.300: INFO: namespace projected-2805 deletion completed in 6.118210154s
• [SLOW TEST:10.607 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:48:38.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:48:38.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-382" for this suite.
Aug 11 07:49:00.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:49:00.693: INFO: namespace pods-382 deletion completed in 22.210669921s
• [SLOW TEST:22.393 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:49:00.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-78843a69-7134-46c1-9746-44d1f420cade
STEP: Creating a pod to test consume configMaps
Aug 11 07:49:00.774: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0" in namespace "configmap-852" to be "success or failure"
Aug 11 07:49:00.804: INFO: Pod "pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.501896ms
Aug 11 07:49:02.808: INFO: Pod "pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033557152s
Aug 11 07:49:04.812: INFO: Pod "pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037240323s
STEP: Saw pod success
Aug 11 07:49:04.812: INFO: Pod "pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0" satisfied condition "success or failure"
Aug 11 07:49:04.814: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0 container configmap-volume-test:
STEP: delete the pod
Aug 11 07:49:04.848: INFO: Waiting for pod pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0 to disappear
Aug 11 07:49:04.858: INFO: Pod pod-configmaps-f6ce7d1d-c512-4444-89ec-9fc61e53ede0 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:49:04.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-852" for this suite.
Aug 11 07:49:10.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:49:10.945: INFO: namespace configmap-852 deletion completed in 6.082915404s
• [SLOW TEST:10.251 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:49:10.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 11 07:49:11.637: INFO: Pod name wrapped-volume-race-690eca33-8bff-4b6f-a5ae-e6ffcea7a841: Found 0 pods out of 5
Aug 11 07:49:16.646: INFO: Pod name wrapped-volume-race-690eca33-8bff-4b6f-a5ae-e6ffcea7a841: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-690eca33-8bff-4b6f-a5ae-e6ffcea7a841 in namespace emptydir-wrapper-6707, will wait for the garbage collector to delete the pods
Aug 11 07:49:32.771: INFO: Deleting ReplicationController wrapped-volume-race-690eca33-8bff-4b6f-a5ae-e6ffcea7a841 took: 25.687096ms
Aug 11 07:49:33.072: INFO: Terminating ReplicationController wrapped-volume-race-690eca33-8bff-4b6f-a5ae-e6ffcea7a841 pods took: 300.235365ms
STEP: Creating RC which spawns configmap-volume pods
Aug 11 07:50:15.726: INFO: Pod name wrapped-volume-race-0b8d1440-f256-4ff7-8665-3f2aa8797b48: Found 0 pods out of 5
Aug 11 07:50:20.750: INFO: Pod name wrapped-volume-race-0b8d1440-f256-4ff7-8665-3f2aa8797b48: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0b8d1440-f256-4ff7-8665-3f2aa8797b48 in namespace emptydir-wrapper-6707, will wait for the garbage collector to delete the pods
Aug 11 07:50:37.166: INFO: Deleting ReplicationController wrapped-volume-race-0b8d1440-f256-4ff7-8665-3f2aa8797b48 took: 17.5776ms
Aug 11 07:50:37.467: INFO: Terminating ReplicationController wrapped-volume-race-0b8d1440-f256-4ff7-8665-3f2aa8797b48 pods took: 300.291876ms
STEP: Creating RC which spawns configmap-volume pods
Aug 11 07:51:16.100: INFO: Pod name wrapped-volume-race-0da0062f-8d99-4806-9d21-3f3a3aa5254e: Found 0 pods out of 5
Aug 11 07:51:21.108: INFO: Pod name wrapped-volume-race-0da0062f-8d99-4806-9d21-3f3a3aa5254e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0da0062f-8d99-4806-9d21-3f3a3aa5254e in namespace emptydir-wrapper-6707, will wait for the garbage collector to delete the pods
Aug 11 07:51:37.197: INFO: Deleting ReplicationController wrapped-volume-race-0da0062f-8d99-4806-9d21-3f3a3aa5254e took: 7.580477ms
Aug 11 07:51:37.497: INFO: Terminating ReplicationController wrapped-volume-race-0da0062f-8d99-4806-9d21-3f3a3aa5254e pods took: 300.25387ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:52:16.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6707" for this suite.
Aug 11 07:52:26.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:52:26.285: INFO: namespace emptydir-wrapper-6707 deletion completed in 10.101421821s
• [SLOW TEST:195.340 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:52:26.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7163/configmap-test-c3ebcdfa-4034-4a24-a642-a6447369e353
STEP: Creating a pod to test consume configMaps
Aug 11 07:52:26.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0" in namespace "configmap-7163" to be "success or failure"
Aug 11 07:52:26.422: INFO: Pod "pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.526919ms
Aug 11 07:52:28.426: INFO: Pod "pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112305s
Aug 11 07:52:30.431: INFO: Pod "pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011713063s
STEP: Saw pod success
Aug 11 07:52:30.431: INFO: Pod "pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0" satisfied condition "success or failure"
Aug 11 07:52:30.434: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0 container env-test:
STEP: delete the pod
Aug 11 07:52:30.578: INFO: Waiting for pod pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0 to disappear
Aug 11 07:52:30.595: INFO: Pod pod-configmaps-b4a11ecf-848e-4d73-9940-da344ef2cdd0 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:52:30.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7163" for this suite.
Aug 11 07:52:36.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:52:36.695: INFO: namespace configmap-7163 deletion completed in 6.09627941s
• [SLOW TEST:10.409 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:52:36.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 07:52:36.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938" in namespace "downward-api-9351" to be "success or failure"
Aug 11 07:52:36.812: INFO: Pod "downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938": Phase="Pending", Reason="", readiness=false. Elapsed: 57.375539ms
Aug 11 07:52:38.815: INFO: Pod "downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060728907s
Aug 11 07:52:40.830: INFO: Pod "downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075099452s
STEP: Saw pod success
Aug 11 07:52:40.830: INFO: Pod "downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938" satisfied condition "success or failure"
Aug 11 07:52:40.833: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938 container client-container:
STEP: delete the pod
Aug 11 07:52:40.868: INFO: Waiting for pod downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938 to disappear
Aug 11 07:52:40.883: INFO: Pod downwardapi-volume-8bf0dfb3-f8b5-46d1-8b5b-993eda9bf938 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:52:40.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9351" for this suite.
Aug 11 07:52:46.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:52:47.081: INFO: namespace downward-api-9351 deletion completed in 6.194617666s • [SLOW TEST:10.385 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:52:47.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 11 07:52:47.156: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8125,SelfLink:/api/v1/namespaces/watch-8125/configmaps/e2e-watch-test-watch-closed,UID:8cecedb7-1c01-4312-8948-293d55ec5bd2,ResourceVersion:4145121,Generation:0,CreationTimestamp:2020-08-11 07:52:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 11 07:52:47.156: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8125,SelfLink:/api/v1/namespaces/watch-8125/configmaps/e2e-watch-test-watch-closed,UID:8cecedb7-1c01-4312-8948-293d55ec5bd2,ResourceVersion:4145122,Generation:0,CreationTimestamp:2020-08-11 07:52:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 11 07:52:47.191: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8125,SelfLink:/api/v1/namespaces/watch-8125/configmaps/e2e-watch-test-watch-closed,UID:8cecedb7-1c01-4312-8948-293d55ec5bd2,ResourceVersion:4145123,Generation:0,CreationTimestamp:2020-08-11 07:52:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 11 07:52:47.191: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8125,SelfLink:/api/v1/namespaces/watch-8125/configmaps/e2e-watch-test-watch-closed,UID:8cecedb7-1c01-4312-8948-293d55ec5bd2,ResourceVersion:4145124,Generation:0,CreationTimestamp:2020-08-11 07:52:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:52:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8125" for this suite. Aug 11 07:52:53.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:52:53.289: INFO: namespace watch-8125 deletion completed in 6.092594992s • [SLOW TEST:6.207 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:52:53.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4135d9d5-4c4c-4bc7-b07d-dd12ac2ffa89 STEP: Creating a pod to test consume secrets Aug 11 07:52:53.485: INFO: Waiting up to 5m0s for pod "pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a" in namespace "secrets-5508" to be "success or failure" Aug 11 07:52:53.506: INFO: Pod "pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a": Phase="Pending", 
Reason="", readiness=false. Elapsed: 21.011726ms Aug 11 07:52:55.897: INFO: Pod "pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.412621327s Aug 11 07:52:57.902: INFO: Pod "pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.41720298s STEP: Saw pod success Aug 11 07:52:57.902: INFO: Pod "pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a" satisfied condition "success or failure" Aug 11 07:52:57.905: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a container secret-volume-test: STEP: delete the pod Aug 11 07:52:57.929: INFO: Waiting for pod pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a to disappear Aug 11 07:52:57.950: INFO: Pod pod-secrets-c5b30e18-0d65-45c8-a43f-6a53e2790d8a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:52:57.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5508" for this suite. Aug 11 07:53:04.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:53:04.155: INFO: namespace secrets-5508 deletion completed in 6.200396108s STEP: Destroying namespace "secret-namespace-1667" for this suite. Aug 11 07:53:10.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:53:10.272: INFO: namespace secret-namespace-1667 deletion completed in 6.116906922s • [SLOW TEST:16.982 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:53:10.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 11 07:53:10.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4159' Aug 11 07:53:10.417: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 11 07:53:10.417: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Aug 11 07:53:10.428: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Aug 11 07:53:10.441: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Aug 11 07:53:10.450: INFO: scanned /root for discovery docs: Aug 11 07:53:10.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4159' Aug 11 07:53:26.379: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 11 07:53:26.379: INFO: stdout: "Created e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e\nScaling up e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Aug 11 07:53:26.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4159' Aug 11 07:53:26.486: INFO: stderr: "" Aug 11 07:53:26.486: INFO: stdout: "e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e-9f8gt " Aug 11 07:53:26.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e-9f8gt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4159' Aug 11 07:53:26.582: INFO: stderr: "" Aug 11 07:53:26.582: INFO: stdout: "true" Aug 11 07:53:26.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e-9f8gt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4159' Aug 11 07:53:26.668: INFO: stderr: "" Aug 11 07:53:26.668: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Aug 11 07:53:26.668: INFO: e2e-test-nginx-rc-62f6c7d54d48f4aa714cd1778f2f9d5e-9f8gt is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Aug 11 07:53:26.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4159' Aug 11 07:53:26.783: INFO: stderr: "" Aug 11 07:53:26.783: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:53:26.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4159" for this suite. Aug 11 07:53:32.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:53:33.023: INFO: namespace kubectl-4159 deletion completed in 6.130956025s • [SLOW TEST:22.750 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:53:33.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 11 07:53:33.104: INFO: PodSpec: initContainers in spec.initContainers Aug 11 07:54:26.360: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a5da1cce-4b5e-4814-b6d5-46cd628351cd", GenerateName:"", Namespace:"init-container-7478", SelfLink:"/api/v1/namespaces/init-container-7478/pods/pod-init-a5da1cce-4b5e-4814-b6d5-46cd628351cd", UID:"ea29751b-fd8b-4046-a213-3bc364094874", ResourceVersion:"4145587", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732729213, loc:(*time.Location)(0x7eb18c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"104104065"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-p47ts", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002650180), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p47ts", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p47ts", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p47ts", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002892288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002664300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002892310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002892330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002892338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00289233c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732729213, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732729213, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732729213, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732729213, loc:(*time.Location)(0x7eb18c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.2.228", StartTime:(*v1.Time)(0xc0014262e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aba150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aba2a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://46cf64a9141922eb4456da38e39d3abc54b4b0bdf02897e65ce0864748c96b15"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001426360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001426320), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:54:26.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7478" for this suite. 
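For readability, the PodSpec dumped above boils down to roughly the following manifest (a reconstruction from the struct dump; generated fields and defaults are omitted, so treat it as a sketch rather than the test's literal input):

kubectl apply -f - <<'EOF'
# init1 always exits non-zero, so init2 and the app container run1 must
# never start; with restartPolicy Always the kubelet keeps retrying init1.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-a5da1cce-4b5e-4814-b6d5-46cd628351cd
  namespace: init-container-7478
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
EOF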
Aug 11 07:54:48.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:54:48.538: INFO: namespace init-container-7478 deletion completed in 22.144079043s • [SLOW TEST:75.515 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:54:48.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2934.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2934.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 07:54:54.666: INFO: DNS probes using dns-test-7e11f30b-98ef-40d8-b78c-8a615a9837b6 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2934.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2934.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 07:55:02.803: INFO: File wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:02.807: INFO: File jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:02.807: INFO: Lookups using dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 failed for: [wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local] Aug 11 07:55:07.813: INFO: File wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. 
' instead of 'bar.example.com.' Aug 11 07:55:07.816: INFO: File jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:07.817: INFO: Lookups using dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 failed for: [wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local] Aug 11 07:55:12.822: INFO: File wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:12.827: INFO: File jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:12.827: INFO: Lookups using dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 failed for: [wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local] Aug 11 07:55:17.812: INFO: File wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:17.815: INFO: File jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:17.815: INFO: Lookups using dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 failed for: [wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local] Aug 11 07:55:22.812: INFO: File wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 11 07:55:22.819: INFO: File jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local from pod dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 contains 'foo.example.com. ' instead of 'bar.example.com.' 
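The repeated failures above are just the probes waiting for the CNAME change to propagate through the cluster DNS. In sketch form, the ExternalName service being probed looks like this (fields inferred from the log, not copied from the test source):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-2934
spec:
  type: ExternalName
  externalName: foo.example.com   # the test later patches this to bar.example.com
EOF

# The probe pods then poll the record in a loop, e.g.:
#   dig +short dns-test-service-3.dns-2934.svc.cluster.local CNAME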
Aug 11 07:55:22.819: INFO: Lookups using dns-2934/dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 failed for: [wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local] Aug 11 07:55:27.815: INFO: DNS probes using dns-test-51dc98c3-530f-44f7-abc8-468f60b3c3d6 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2934.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2934.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2934.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2934.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 11 07:55:34.526: INFO: DNS probes using dns-test-583c7963-421d-4d35-948a-7ccd5ecbe485 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:55:34.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2934" for this suite. Aug 11 07:55:40.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:55:41.088: INFO: namespace dns-2934 deletion completed in 6.194493768s • [SLOW TEST:52.550 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:55:41.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-37d4e8b0-ce63-4db9-bf35-53cdfbf3e50a in namespace container-probe-5652 Aug 11 07:55:45.216: INFO: Started pod liveness-37d4e8b0-ce63-4db9-bf35-53cdfbf3e50a in namespace container-probe-5652 STEP: checking the pod's current state and verifying that restartCount is present Aug 11 07:55:45.218: INFO: Initial restart count of pod liveness-37d4e8b0-ce63-4db9-bf35-53cdfbf3e50a is 0 Aug 11 07:56:03.279: INFO: Restart count of pod container-probe-5652/liveness-37d4e8b0-ce63-4db9-bf35-53cdfbf3e50a is now 1 (18.060463136s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:56:03.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5652" for this suite. Aug 11 07:56:09.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:56:09.421: INFO: namespace container-probe-5652 deletion completed in 6.082675613s • [SLOW TEST:28.332 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:56:09.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 11 07:56:09.507: INFO: Waiting up to 5m0s for pod "pod-02895945-5b45-4692-9dfd-c81bed4f9f15" in namespace "emptydir-4851" to be "success or failure" Aug 11 07:56:09.526: INFO: Pod "pod-02895945-5b45-4692-9dfd-c81bed4f9f15": Phase="Pending", Reason="", readiness=false. Elapsed: 18.766318ms Aug 11 07:56:11.531: INFO: Pod "pod-02895945-5b45-4692-9dfd-c81bed4f9f15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023368369s Aug 11 07:56:13.534: INFO: Pod "pod-02895945-5b45-4692-9dfd-c81bed4f9f15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027282575s STEP: Saw pod success Aug 11 07:56:13.535: INFO: Pod "pod-02895945-5b45-4692-9dfd-c81bed4f9f15" satisfied condition "success or failure" Aug 11 07:56:13.537: INFO: Trying to get logs from node iruya-worker pod pod-02895945-5b45-4692-9dfd-c81bed4f9f15 container test-container: STEP: delete the pod Aug 11 07:56:13.560: INFO: Waiting for pod pod-02895945-5b45-4692-9dfd-c81bed4f9f15 to disappear Aug 11 07:56:13.577: INFO: Pod pod-02895945-5b45-4692-9dfd-c81bed4f9f15 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:56:13.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4851" for this suite. 
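The (non-root,0777,tmpfs) case above can be reproduced by hand with a manifest along these lines (a sketch: busybox stands in for the test's mount-test image, and the pod name and UID are made up). The kubelet creates emptyDir directories world-writable, which is why a non-root user can write to the tmpfs-backed volume:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # non-root, as the test name implies
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir
EOF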
Aug 11 07:56:19.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:56:19.754: INFO: namespace emptydir-4851 deletion completed in 6.17349051s • [SLOW TEST:10.332 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:56:19.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-cca0c774-6478-4cc8-a128-259f6b613b19 STEP: Creating secret with name s-test-opt-upd-77d95f18-eb76-44aa-9aad-7e653e65f0f4 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cca0c774-6478-4cc8-a128-259f6b613b19 STEP: Updating secret s-test-opt-upd-77d95f18-eb76-44aa-9aad-7e653e65f0f4 STEP: Creating secret with name s-test-opt-create-7a0b6771-f336-4a91-aa49-8ed3a90feaab STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:56:28.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1878" for this suite. 
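What the test above exercises is the optional flag on secret volumes: an absent optional secret mounts as an empty directory instead of blocking pod startup, and the kubelet re-syncs the mount when the secret is deleted, updated, or created. A minimal sketch using the secret names from the log (the pod name and mount paths are made up):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo      # hypothetical name
spec:
  containers:
  - name: c
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: del
      mountPath: /etc/secret-del
    - name: create
      mountPath: /etc/secret-create
  volumes:
  - name: del
    secret:
      secretName: s-test-opt-del-cca0c774-6478-4cc8-a128-259f6b613b19
      optional: true              # deleted mid-test; the mounted files disappear
  - name: create
    secret:
      secretName: s-test-opt-create-7a0b6771-f336-4a91-aa49-8ed3a90feaab
      optional: true              # created mid-test; contents appear in the volume
EOF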
Aug 11 07:56:50.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:56:50.158: INFO: namespace secrets-1878 deletion completed in 22.089319889s • [SLOW TEST:30.403 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:56:50.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-244b529d-3da1-44b9-b9f6-b5ee8408469b STEP: Creating a pod to test consume configMaps Aug 11 07:56:50.275: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad" in namespace "projected-6536" to be "success or failure" Aug 11 07:56:50.278: INFO: Pod "pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799939ms Aug 11 07:56:52.306: INFO: Pod "pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031179628s Aug 11 07:56:54.309: INFO: Pod "pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034468293s STEP: Saw pod success Aug 11 07:56:54.309: INFO: Pod "pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad" satisfied condition "success or failure" Aug 11 07:56:54.311: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad container projected-configmap-volume-test: STEP: delete the pod Aug 11 07:56:54.346: INFO: Waiting for pod pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad to disappear Aug 11 07:56:54.363: INFO: Pod pod-projected-configmaps-930e8e54-8049-4679-a617-3e37595f68ad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:56:54.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6536" for this suite. 
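A sketch of the projected configMap volume with a key-to-path mapping consumed as non-root, as exercised above (the configMap name is taken from the log; the key, mapped path, and UID are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo  # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/path/to/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-244b529d-3da1-44b9-b9f6-b5ee8408469b
          items:
          - key: data-1           # hypothetical key
            path: path/to/data    # hypothetical mapped path
EOF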
Aug 11 07:57:00.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:57:00.440: INFO: namespace projected-6536 deletion completed in 6.074287595s • [SLOW TEST:10.282 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:57:00.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Aug 11 07:57:00.532: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9420" to be "success or failure" Aug 11 07:57:00.547: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.78512ms Aug 11 07:57:02.551: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018758904s Aug 11 07:57:04.556: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023796065s Aug 11 07:57:06.560: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027713105s STEP: Saw pod success Aug 11 07:57:06.560: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 11 07:57:06.563: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 11 07:57:06.585: INFO: Waiting for pod pod-host-path-test to disappear Aug 11 07:57:06.589: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:57:06.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9420" for this suite. 
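A rough hand-run equivalent of pod-host-path-test above (a sketch: busybox replaces the test's mount-test image, and the host directory is an assumption):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]   # print the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-demo   # assumed host directory
      type: DirectoryOrCreate
EOF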
Aug 11 07:57:12.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:57:12.683: INFO: namespace hostpath-9420 deletion completed in 6.089796577s • [SLOW TEST:12.242 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:57:12.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 11 07:57:12.786: INFO: Create a RollingUpdate DaemonSet Aug 11 07:57:12.790: INFO: Check that daemon pods launch on every node of the cluster Aug 11 07:57:12.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:12.822: INFO: Number of nodes with available pods: 0 Aug 11 07:57:12.822: INFO: Node iruya-worker is running more than one daemon pod Aug 11 07:57:13.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:13.831: INFO: Number of nodes with available pods: 0 Aug 11 07:57:13.831: INFO: Node iruya-worker is running more than one daemon pod Aug 11 07:57:14.826: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:14.828: INFO: Number of nodes with available pods: 0 Aug 11 07:57:14.828: INFO: Node iruya-worker is running more than one daemon pod Aug 11 07:57:15.937: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:15.940: INFO: Number of nodes with available pods: 0 Aug 11 07:57:15.940: INFO: Node iruya-worker is running more than one daemon pod Aug 11 07:57:16.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:16.843: INFO: Number of nodes with available pods: 1 Aug 11 07:57:16.843: INFO: Node iruya-worker2 is running more than one daemon pod Aug 11 07:57:17.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:17.831: INFO: Number of nodes with 
available pods: 2 Aug 11 07:57:17.831: INFO: Number of running nodes: 2, number of available pods: 2 Aug 11 07:57:17.831: INFO: Update the DaemonSet to trigger a rollout Aug 11 07:57:17.838: INFO: Updating DaemonSet daemon-set Aug 11 07:57:26.035: INFO: Roll back the DaemonSet before rollout is complete Aug 11 07:57:26.042: INFO: Updating DaemonSet daemon-set Aug 11 07:57:26.042: INFO: Make sure DaemonSet rollback is complete Aug 11 07:57:26.079: INFO: Wrong image for pod: daemon-set-hcdk2. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 11 07:57:26.079: INFO: Pod daemon-set-hcdk2 is not available Aug 11 07:57:26.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:27.109: INFO: Wrong image for pod: daemon-set-hcdk2. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 11 07:57:27.109: INFO: Pod daemon-set-hcdk2 is not available Aug 11 07:57:27.114: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 11 07:57:28.098: INFO: Pod daemon-set-s8k6s is not available Aug 11 07:57:28.102: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4883, will wait for the garbage collector to delete the pods Aug 11 07:57:28.168: INFO: Deleting DaemonSet.extensions daemon-set took: 5.904726ms Aug 11 07:57:28.268: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.251437ms Aug 11 07:57:31.771: INFO: Number of nodes with available pods: 0 Aug 11 07:57:31.771: INFO: Number of running nodes: 0, number of available pods: 0 Aug 11 07:57:31.776: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4883/daemonsets","resourceVersion":"4146540"},"items":null} Aug 11 07:57:31.778: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4883/pods","resourceVersion":"4146540"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:57:31.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4883" for this suite. 
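Expressed as kubectl commands, the rollback sequence above is roughly the following (a sketch: the test drives the API directly rather than shelling out, and the container name app is an assumption):

kubectl -n daemonsets-4883 set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout that can never complete
kubectl -n daemonsets-4883 rollout undo daemonset/daemon-set                     # roll back before the rollout finishes
kubectl -n daemonsets-4883 rollout status daemonset/daemon-set                   # pods that never ran the bad image are left untouched, hence "without unnecessary restarts"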
Aug 11 07:57:37.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:57:37.898: INFO: namespace daemonsets-4883 deletion completed in 6.10743108s • [SLOW TEST:25.215 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:57:37.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7952 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7952 STEP: Creating statefulset with conflicting port in namespace statefulset-7952 STEP: Waiting until pod test-pod will start running in namespace statefulset-7952 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7952 Aug 11 07:57:44.115: INFO: Observed stateful pod in namespace: statefulset-7952, name: ss-0, uid: 6965bfa5-fa1c-4c13-ab2c-dd39c7b27646, status phase: Pending. Waiting for statefulset controller to delete. Aug 11 07:57:44.243: INFO: Observed stateful pod in namespace: statefulset-7952, name: ss-0, uid: 6965bfa5-fa1c-4c13-ab2c-dd39c7b27646, status phase: Failed. Waiting for statefulset controller to delete. Aug 11 07:57:44.252: INFO: Observed stateful pod in namespace: statefulset-7952, name: ss-0, uid: 6965bfa5-fa1c-4c13-ab2c-dd39c7b27646, status phase: Failed. Waiting for statefulset controller to delete. 
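The Failed phases observed above are expected: a pre-created pod already binds the node's hostPort, so the kubelet rejects each recreated ss-0 until that pod is removed. A fragment of such a conflicting spec (the actual port number is not shown in the log; 21017 is an assumption):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: statefulset-7952
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 21017
      hostPort: 21017             # same hostPort as the statefulset's pod template
EOF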
Aug 11 07:57:44.269: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7952 STEP: Removing pod with conflicting port in namespace statefulset-7952 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7952 and in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 11 07:57:50.498: INFO: Deleting all statefulsets in ns statefulset-7952 Aug 11 07:57:50.501: INFO: Scaling statefulset ss to 0 Aug 11 07:58:00.518: INFO: Waiting for statefulset status.replicas updated to 0 Aug 11 07:58:00.520: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:58:00.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7952" for this suite. Aug 11 07:58:06.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:58:06.732: INFO: namespace statefulset-7952 deletion completed in 6.118476091s • [SLOW TEST:28.833 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:58:06.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 11 07:58:13.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-0eaeda08-4ee7-4064-9179-4a61022d9cb7 -c busybox-main-container --namespace=emptydir-3253 -- cat /usr/share/volumeshare/shareddata.txt' Aug 11 07:58:13.218: INFO: stderr: "I0811 07:58:13.131158 197 log.go:172] (0xc00054e630) (0xc0005aae60) Create stream\nI0811 07:58:13.131218 197 log.go:172] (0xc00054e630) (0xc0005aae60) Stream added, broadcasting: 1\nI0811 07:58:13.138592 197 log.go:172] (0xc00054e630) Reply frame received for 1\nI0811 07:58:13.138638 197 log.go:172] (0xc00054e630) (0xc0009fc000) Create stream\nI0811 07:58:13.138661 197 log.go:172] (0xc00054e630) (0xc0009fc000) Stream added, broadcasting: 3\nI0811 07:58:13.139480 197 log.go:172] (0xc00054e630) Reply frame received for 3\nI0811 07:58:13.139518 197 log.go:172] (0xc00054e630) (0xc0005aaf00) Create stream\nI0811 07:58:13.139531 197 log.go:172] (0xc00054e630) (0xc0005aaf00) Stream added, broadcasting: 5\nI0811 07:58:13.140476 
197 log.go:172] (0xc00054e630) Reply frame received for 5\nI0811 07:58:13.210525 197 log.go:172] (0xc00054e630) Data frame received for 5\nI0811 07:58:13.210965 197 log.go:172] (0xc0005aaf00) (5) Data frame handling\nI0811 07:58:13.211001 197 log.go:172] (0xc00054e630) Data frame received for 3\nI0811 07:58:13.211014 197 log.go:172] (0xc0009fc000) (3) Data frame handling\nI0811 07:58:13.211031 197 log.go:172] (0xc0009fc000) (3) Data frame sent\nI0811 07:58:13.211037 197 log.go:172] (0xc00054e630) Data frame received for 3\nI0811 07:58:13.211041 197 log.go:172] (0xc0009fc000) (3) Data frame handling\nI0811 07:58:13.212237 197 log.go:172] (0xc00054e630) Data frame received for 1\nI0811 07:58:13.212261 197 log.go:172] (0xc0005aae60) (1) Data frame handling\nI0811 07:58:13.212277 197 log.go:172] (0xc0005aae60) (1) Data frame sent\nI0811 07:58:13.212505 197 log.go:172] (0xc00054e630) (0xc0005aae60) Stream removed, broadcasting: 1\nI0811 07:58:13.212571 197 log.go:172] (0xc00054e630) Go away received\nI0811 07:58:13.212867 197 log.go:172] (0xc00054e630) (0xc0005aae60) Stream removed, broadcasting: 1\nI0811 07:58:13.212886 197 log.go:172] (0xc00054e630) (0xc0009fc000) Stream removed, broadcasting: 3\nI0811 07:58:13.212893 197 log.go:172] (0xc00054e630) (0xc0005aaf00) Stream removed, broadcasting: 5\n" Aug 11 07:58:13.218: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:58:13.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3253" for this suite. Aug 11 07:58:21.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:58:21.445: INFO: namespace emptydir-3253 deletion completed in 8.223063482s • [SLOW TEST:14.714 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:58:21.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 11 07:58:21.582: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee" in namespace "projected-1514" to be "success or failure" Aug 11 07:58:21.585: INFO: Pod 
"downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093823ms Aug 11 07:58:23.590: INFO: Pod "downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007352867s Aug 11 07:58:25.593: INFO: Pod "downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011165809s STEP: Saw pod success Aug 11 07:58:25.594: INFO: Pod "downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee" satisfied condition "success or failure" Aug 11 07:58:25.596: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee container client-container: STEP: delete the pod Aug 11 07:58:25.629: INFO: Waiting for pod downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee to disappear Aug 11 07:58:25.699: INFO: Pod downwardapi-volume-bc9986a5-b224-4ac0-ae4c-abd1dfc47dee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:58:25.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1514" for this suite. Aug 11 07:58:31.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:58:31.839: INFO: namespace projected-1514 deletion completed in 6.133567403s • [SLOW TEST:10.392 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:58:31.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0811 07:58:41.952274 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 11 07:58:41.952: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 07:58:41.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6495" for this suite. Aug 11 07:58:49.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 07:58:50.051: INFO: namespace gc-6495 deletion completed in 8.095701375s • [SLOW TEST:18.211 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 07:58:50.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a5e0e812-0b24-49da-98ba-7eedf57084cb STEP: Creating a pod to test consume secrets Aug 11 07:58:50.160: INFO: Waiting up to 5m0s for pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a" in namespace "secrets-1497" to be "success or failure" Aug 11 07:58:50.166: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36095ms Aug 11 07:58:52.171: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01106538s Aug 11 07:58:54.175: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a": Phase="Succeeded", Reason="", readiness=false. 
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:58:50.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a5e0e812-0b24-49da-98ba-7eedf57084cb
STEP: Creating a pod to test consume secrets
Aug 11 07:58:50.160: INFO: Waiting up to 5m0s for pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a" in namespace "secrets-1497" to be "success or failure"
Aug 11 07:58:50.166: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36095ms
Aug 11 07:58:52.171: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01106538s
Aug 11 07:58:54.175: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015081609s
STEP: Saw pod success
Aug 11 07:58:54.175: INFO: Pod "pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a" satisfied condition "success or failure"
Aug 11 07:58:54.178: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a container secret-volume-test: 
STEP: delete the pod
Aug 11 07:58:54.298: INFO: Waiting for pod pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a to disappear
Aug 11 07:58:54.310: INFO: Pod pod-secrets-7891198f-8b37-4325-8193-af5278d2d32a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:58:54.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1497" for this suite.
Aug 11 07:59:00.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:59:00.407: INFO: namespace secrets-1497 deletion completed in 6.092495928s

• [SLOW TEST:10.356 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
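The combination this test exercises is a secret volume with an explicit defaultMode plus a pod-level runAsUser/fsGroup, so a non-root process can still read the projected files. A minimal sketch, assuming cs and ctx as in the earlier sketches; names, UID/GID and mode are illustrative:

	mode := int32(0440)
	uid, fsGroup := int64(1000), int64(1001)
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-demo"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{})
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Non-root user; fsGroup makes the kubelet chown the volume's
			// files so that group can read them.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-demo", DefaultMode: &mode},
				},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
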
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:59:00.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-207ae16c-b231-4c37-a7ab-330d8f96b69e
STEP: Creating a pod to test consume configMaps
Aug 11 07:59:00.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6" in namespace "configmap-1380" to be "success or failure"
Aug 11 07:59:00.574: INFO: Pod "pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.445652ms
Aug 11 07:59:02.858: INFO: Pod "pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28971512s
Aug 11 07:59:04.863: INFO: Pod "pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6": Phase="Running", Reason="", readiness=true. Elapsed: 4.294373343s
Aug 11 07:59:06.868: INFO: Pod "pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.299079865s
STEP: Saw pod success
Aug 11 07:59:06.868: INFO: Pod "pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6" satisfied condition "success or failure"
Aug 11 07:59:06.871: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6 container configmap-volume-test: 
STEP: delete the pod
Aug 11 07:59:06.908: INFO: Waiting for pod pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6 to disappear
Aug 11 07:59:06.959: INFO: Pod pod-configmaps-baa4ff26-4941-4b71-b539-9249283f9fe6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:59:06.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1380" for this suite.
Aug 11 07:59:13.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:59:13.088: INFO: namespace configmap-1380 deletion completed in 6.083972616s

• [SLOW TEST:12.681 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
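The configMap counterpart uses the same volume machinery; defaultMode sets the permission bits the kubelet gives every projected file. A minimal sketch of the relevant objects, assuming cs and ctx as above; names and mode are illustrative:

	mode := int32(0400)
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{})
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"},
				// Every file in the volume is written with mode 0400.
				DefaultMode: &mode,
			},
		},
	}
	_ = vol // mount it in a pod spec exactly as in the secret example above
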
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:59:13.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:59:17.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7093" for this suite.
Aug 11 07:59:23.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:59:23.410: INFO: namespace emptydir-wrapper-7093 deletion completed in 6.114345878s

• [SLOW TEST:10.322 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
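Secret and configMap volumes are both implemented on top of an emptyDir "wrapper", and this test checks that several such volumes can coexist in one pod without clashing. A minimal sketch, assuming cs/ctx as above and the secret-demo and configmap-demo objects from the earlier sketches; everything here is illustrative:

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-wrapper-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox",
				Command: []string{"sleep", "60"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret"},
					{Name: "cm-vol", MountPath: "/etc/config"},
				},
			}},
			// Two wrapped volumes side by side: both are backed by the same
			// emptyDir mechanism and must not conflict.
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-demo"}}},
				{Name: "cm-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"}}}},
			},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
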
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:59:23.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ee00384a-359f-48ad-a712-b64c8aed89f3
STEP: Creating a pod to test consume configMaps
Aug 11 07:59:23.582: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249" in namespace "projected-6386" to be "success or failure"
Aug 11 07:59:23.604: INFO: Pod "pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249": Phase="Pending", Reason="", readiness=false. Elapsed: 22.025861ms
Aug 11 07:59:25.620: INFO: Pod "pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037802592s
Aug 11 07:59:27.628: INFO: Pod "pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045836106s
Aug 11 07:59:29.633: INFO: Pod "pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05082503s
STEP: Saw pod success
Aug 11 07:59:29.633: INFO: Pod "pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249" satisfied condition "success or failure"
Aug 11 07:59:29.636: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 07:59:29.662: INFO: Waiting for pod pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249 to disappear
Aug 11 07:59:29.666: INFO: Pod pod-projected-configmaps-b6d1c585-83a2-44c8-96d8-63c34bc89249 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 07:59:29.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6386" for this suite.
Aug 11 07:59:35.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 07:59:35.781: INFO: namespace projected-6386 deletion completed in 6.109938726s

• [SLOW TEST:12.371 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
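The "projected" variant wraps the configMap source in a projected volume, which can combine several sources under one mount; the non-root angle is again a pod-level runAsUser. A minimal sketch of the two differing pieces, assuming the pod skeleton and cs/ctx from the sketches above; names are illustrative:

	uid := int64(1000)
	podSecurity := &corev1.PodSecurityContext{RunAsUser: &uid} // run as non-root
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// A projected volume can mix configMap, secret, downwardAPI
				// and serviceAccountToken sources; here just one configMap.
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"},
					},
				}},
			},
		},
	}
	_, _ = podSecurity, vol // plug both into a pod spec as before
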
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 07:59:35.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-bzb7
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 07:59:35.967: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bzb7" in namespace "subpath-4846" to be "success or failure"
Aug 11 07:59:35.972: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.499414ms
Aug 11 07:59:37.977: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009807258s
Aug 11 07:59:39.981: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 4.014269788s
Aug 11 07:59:41.985: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 6.018401486s
Aug 11 07:59:43.990: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 8.02288529s
Aug 11 07:59:45.994: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 10.027374631s
Aug 11 07:59:47.999: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 12.031895863s
Aug 11 07:59:50.004: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 14.036579968s
Aug 11 07:59:52.008: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 16.040804814s
Aug 11 07:59:54.013: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 18.045670252s
Aug 11 07:59:56.017: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 20.049906359s
Aug 11 07:59:58.021: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Running", Reason="", readiness=true. Elapsed: 22.054151216s
Aug 11 08:00:00.026: INFO: Pod "pod-subpath-test-configmap-bzb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058857485s
STEP: Saw pod success
Aug 11 08:00:00.026: INFO: Pod "pod-subpath-test-configmap-bzb7" satisfied condition "success or failure"
Aug 11 08:00:00.029: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-bzb7 container test-container-subpath-configmap-bzb7: 
STEP: delete the pod
Aug 11 08:00:00.050: INFO: Waiting for pod pod-subpath-test-configmap-bzb7 to disappear
Aug 11 08:00:00.080: INFO: Pod pod-subpath-test-configmap-bzb7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bzb7
Aug 11 08:00:00.080: INFO: Deleting pod "pod-subpath-test-configmap-bzb7" in namespace "subpath-4846"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:00:00.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4846" for this suite.
Aug 11 08:00:06.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:00:06.261: INFO: namespace subpath-4846 deletion completed in 6.17329073s

• [SLOW TEST:30.479 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:00:06.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:00:06.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6501" for this suite.
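On the subpath test that finished above: a subPath mounts only one path inside a volume rather than the whole volume, and the "atomic writer" aspect is that the kubelet keeps updating the backing configMap file while the pod reads it through the subPath. A minimal sketch of the differing mount, assuming the configMap volume from the earlier sketches; names and paths are illustrative:

	mounts := []corev1.VolumeMount{{
		Name:      "configmap-volume",
		MountPath: "/test-volume/sub",
		// Only this path within the volume is mounted, not the volume root.
		SubPath: "sub",
	}}
	_ = mounts // use in place of the full-volume mount in the pod specs above
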
Aug 11 08:00:12.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 08:00:12.571: INFO: namespace services-6501 deletion completed in 6.235404477s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.309 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 08:00:12.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6407 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 11 08:00:12.648: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 11 08:00:40.774: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostName&protocol=http&host=10.244.2.242&port=8080&tries=1'] Namespace:pod-network-test-6407 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:00:40.774: INFO: >>> kubeConfig: /root/.kube/config I0811 08:00:40.803922 6 log.go:172] (0xc001398210) (0xc001bf4be0) Create stream I0811 08:00:40.803978 6 log.go:172] (0xc001398210) (0xc001bf4be0) Stream added, broadcasting: 1 I0811 08:00:40.806566 6 log.go:172] (0xc001398210) Reply frame received for 1 I0811 08:00:40.806614 6 log.go:172] (0xc001398210) (0xc0016ee000) Create stream I0811 08:00:40.806629 6 log.go:172] (0xc001398210) (0xc0016ee000) Stream added, broadcasting: 3 I0811 08:00:40.807507 6 log.go:172] (0xc001398210) Reply frame received for 3 I0811 08:00:40.807544 6 log.go:172] (0xc001398210) (0xc001bf4d20) Create stream I0811 08:00:40.807571 6 log.go:172] (0xc001398210) (0xc001bf4d20) Stream added, broadcasting: 5 I0811 08:00:40.808567 6 log.go:172] (0xc001398210) Reply frame received for 5 I0811 08:00:40.914558 6 log.go:172] (0xc001398210) Data frame received for 3 I0811 08:00:40.914589 6 log.go:172] (0xc0016ee000) (3) Data frame handling I0811 08:00:40.914613 6 log.go:172] (0xc0016ee000) (3) Data frame sent I0811 08:00:40.915219 6 log.go:172] (0xc001398210) Data frame received for 3 I0811 08:00:40.915260 6 log.go:172] (0xc0016ee000) (3) Data frame handling I0811 08:00:40.915487 6 log.go:172] (0xc001398210) Data frame received for 5 I0811 08:00:40.915503 6 log.go:172] (0xc001bf4d20) (5) Data frame handling I0811 08:00:40.916841 6 log.go:172] (0xc001398210) Data frame received for 1 I0811 
08:00:40.916859 6 log.go:172] (0xc001bf4be0) (1) Data frame handling I0811 08:00:40.916865 6 log.go:172] (0xc001bf4be0) (1) Data frame sent I0811 08:00:40.916873 6 log.go:172] (0xc001398210) (0xc001bf4be0) Stream removed, broadcasting: 1 I0811 08:00:40.916887 6 log.go:172] (0xc001398210) Go away received I0811 08:00:40.917061 6 log.go:172] (0xc001398210) (0xc001bf4be0) Stream removed, broadcasting: 1 I0811 08:00:40.917075 6 log.go:172] (0xc001398210) (0xc0016ee000) Stream removed, broadcasting: 3 I0811 08:00:40.917080 6 log.go:172] (0xc001398210) (0xc001bf4d20) Stream removed, broadcasting: 5 Aug 11 08:00:40.917: INFO: Waiting for endpoints: map[] Aug 11 08:00:40.920: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostName&protocol=http&host=10.244.1.68&port=8080&tries=1'] Namespace:pod-network-test-6407 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:00:40.920: INFO: >>> kubeConfig: /root/.kube/config I0811 08:00:40.949910 6 log.go:172] (0xc000a71970) (0xc0032f79a0) Create stream I0811 08:00:40.949934 6 log.go:172] (0xc000a71970) (0xc0032f79a0) Stream added, broadcasting: 1 I0811 08:00:40.955333 6 log.go:172] (0xc000a71970) Reply frame received for 1 I0811 08:00:40.955375 6 log.go:172] (0xc000a71970) (0xc002905ae0) Create stream I0811 08:00:40.955388 6 log.go:172] (0xc000a71970) (0xc002905ae0) Stream added, broadcasting: 3 I0811 08:00:40.956980 6 log.go:172] (0xc000a71970) Reply frame received for 3 I0811 08:00:40.957034 6 log.go:172] (0xc000a71970) (0xc0030a8f00) Create stream I0811 08:00:40.957049 6 log.go:172] (0xc000a71970) (0xc0030a8f00) Stream added, broadcasting: 5 I0811 08:00:40.958177 6 log.go:172] (0xc000a71970) Reply frame received for 5 I0811 08:00:41.029736 6 log.go:172] (0xc000a71970) Data frame received for 3 I0811 08:00:41.029760 6 log.go:172] (0xc002905ae0) (3) Data frame handling I0811 08:00:41.029775 6 log.go:172] (0xc002905ae0) (3) Data frame sent I0811 08:00:41.030397 6 log.go:172] (0xc000a71970) Data frame received for 5 I0811 08:00:41.030413 6 log.go:172] (0xc0030a8f00) (5) Data frame handling I0811 08:00:41.030595 6 log.go:172] (0xc000a71970) Data frame received for 3 I0811 08:00:41.030607 6 log.go:172] (0xc002905ae0) (3) Data frame handling I0811 08:00:41.032249 6 log.go:172] (0xc000a71970) Data frame received for 1 I0811 08:00:41.032273 6 log.go:172] (0xc0032f79a0) (1) Data frame handling I0811 08:00:41.032305 6 log.go:172] (0xc0032f79a0) (1) Data frame sent I0811 08:00:41.032322 6 log.go:172] (0xc000a71970) (0xc0032f79a0) Stream removed, broadcasting: 1 I0811 08:00:41.032428 6 log.go:172] (0xc000a71970) (0xc0032f79a0) Stream removed, broadcasting: 1 I0811 08:00:41.032442 6 log.go:172] (0xc000a71970) (0xc002905ae0) Stream removed, broadcasting: 3 I0811 08:00:41.032638 6 log.go:172] (0xc000a71970) Go away received I0811 08:00:41.032676 6 log.go:172] (0xc000a71970) (0xc0030a8f00) Stream removed, broadcasting: 5 Aug 11 08:00:41.032: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 11 08:00:41.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6407" for this suite. 
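The ExecWithOptions/curl traffic in the networking test above runs over the API server's exec subresource; the "Create stream ... broadcasting: 1/3/5" lines are the SPDY frames that carry the command's error, stdout and stderr channels back to the client. A minimal client-go sketch of the same mechanism, assuming cfg and cs from the first sketch; pod, container and command are illustrative:

	import (
		"bytes"

		"k8s.io/client-go/kubernetes/scheme"
		"k8s.io/client-go/tools/remotecommand"
	)

	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("default").Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "hostexec",
			Command:   []string{"/bin/sh", "-c", "curl -g -q -s http://10.244.2.242:8080/"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream opens the SPDY connection and multiplexes the channels whose
	// frames appear in the log above.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
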
Aug 11 08:00:59.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 11 08:00:59.172: INFO: namespace pod-network-test-6407 deletion completed in 18.134972086s • [SLOW TEST:46.600 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 11 08:00:59.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 11 08:01:09.337: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:09.337: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:09.412836 6 log.go:172] (0xc0015f46e0) (0xc001fb0280) Create stream I0811 08:01:09.412880 6 log.go:172] (0xc0015f46e0) (0xc001fb0280) Stream added, broadcasting: 1 I0811 08:01:09.415523 6 log.go:172] (0xc0015f46e0) Reply frame received for 1 I0811 08:01:09.415570 6 log.go:172] (0xc0015f46e0) (0xc001fb0320) Create stream I0811 08:01:09.415596 6 log.go:172] (0xc0015f46e0) (0xc001fb0320) Stream added, broadcasting: 3 I0811 08:01:09.416424 6 log.go:172] (0xc0015f46e0) Reply frame received for 3 I0811 08:01:09.416453 6 log.go:172] (0xc0015f46e0) (0xc001fb0460) Create stream I0811 08:01:09.416462 6 log.go:172] (0xc0015f46e0) (0xc001fb0460) Stream added, broadcasting: 5 I0811 08:01:09.417476 6 log.go:172] (0xc0015f46e0) Reply frame received for 5 I0811 08:01:09.514837 6 log.go:172] (0xc0015f46e0) Data frame received for 5 I0811 08:01:09.514883 6 log.go:172] (0xc0015f46e0) Data frame received for 3 I0811 08:01:09.514928 6 log.go:172] (0xc001fb0320) (3) Data frame handling I0811 08:01:09.514957 6 log.go:172] (0xc001fb0320) (3) Data frame sent I0811 08:01:09.514973 6 log.go:172] (0xc0015f46e0) Data frame received for 3 I0811 08:01:09.514990 6 log.go:172] (0xc001fb0320) (3) Data frame handling I0811 08:01:09.515089 6 log.go:172] (0xc001fb0460) (5) Data frame handling I0811 08:01:09.516619 6 log.go:172] (0xc0015f46e0) Data frame received for 1 I0811 08:01:09.516655 6 log.go:172] (0xc001fb0280) (1) Data frame handling I0811 08:01:09.516688 6 log.go:172] (0xc001fb0280) (1) Data frame sent I0811 08:01:09.516821 6 log.go:172] (0xc0015f46e0) 
(0xc001fb0280) Stream removed, broadcasting: 1 I0811 08:01:09.516878 6 log.go:172] (0xc0015f46e0) Go away received I0811 08:01:09.517063 6 log.go:172] (0xc0015f46e0) (0xc001fb0280) Stream removed, broadcasting: 1 I0811 08:01:09.517100 6 log.go:172] (0xc0015f46e0) (0xc001fb0320) Stream removed, broadcasting: 3 I0811 08:01:09.517129 6 log.go:172] (0xc0015f46e0) (0xc001fb0460) Stream removed, broadcasting: 5 Aug 11 08:01:09.517: INFO: Exec stderr: "" Aug 11 08:01:09.517: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:09.517: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:09.552085 6 log.go:172] (0xc00205ed10) (0xc002edf4a0) Create stream I0811 08:01:09.552112 6 log.go:172] (0xc00205ed10) (0xc002edf4a0) Stream added, broadcasting: 1 I0811 08:01:09.554705 6 log.go:172] (0xc00205ed10) Reply frame received for 1 I0811 08:01:09.554738 6 log.go:172] (0xc00205ed10) (0xc0030a4a00) Create stream I0811 08:01:09.554749 6 log.go:172] (0xc00205ed10) (0xc0030a4a00) Stream added, broadcasting: 3 I0811 08:01:09.555694 6 log.go:172] (0xc00205ed10) Reply frame received for 3 I0811 08:01:09.555733 6 log.go:172] (0xc00205ed10) (0xc0030a4aa0) Create stream I0811 08:01:09.555748 6 log.go:172] (0xc00205ed10) (0xc0030a4aa0) Stream added, broadcasting: 5 I0811 08:01:09.556646 6 log.go:172] (0xc00205ed10) Reply frame received for 5 I0811 08:01:09.626002 6 log.go:172] (0xc00205ed10) Data frame received for 5 I0811 08:01:09.626075 6 log.go:172] (0xc0030a4aa0) (5) Data frame handling I0811 08:01:09.626132 6 log.go:172] (0xc00205ed10) Data frame received for 3 I0811 08:01:09.626162 6 log.go:172] (0xc0030a4a00) (3) Data frame handling I0811 08:01:09.626197 6 log.go:172] (0xc0030a4a00) (3) Data frame sent I0811 08:01:09.626224 6 log.go:172] (0xc00205ed10) Data frame received for 3 I0811 08:01:09.626240 6 log.go:172] (0xc0030a4a00) (3) Data frame handling I0811 08:01:09.627825 6 log.go:172] (0xc00205ed10) Data frame received for 1 I0811 08:01:09.627865 6 log.go:172] (0xc002edf4a0) (1) Data frame handling I0811 08:01:09.627888 6 log.go:172] (0xc002edf4a0) (1) Data frame sent I0811 08:01:09.627901 6 log.go:172] (0xc00205ed10) (0xc002edf4a0) Stream removed, broadcasting: 1 I0811 08:01:09.627916 6 log.go:172] (0xc00205ed10) Go away received I0811 08:01:09.628089 6 log.go:172] (0xc00205ed10) (0xc002edf4a0) Stream removed, broadcasting: 1 I0811 08:01:09.628120 6 log.go:172] (0xc00205ed10) (0xc0030a4a00) Stream removed, broadcasting: 3 I0811 08:01:09.628147 6 log.go:172] (0xc00205ed10) (0xc0030a4aa0) Stream removed, broadcasting: 5 Aug 11 08:01:09.628: INFO: Exec stderr: "" Aug 11 08:01:09.628: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:09.628: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:09.661863 6 log.go:172] (0xc0015f5760) (0xc001fb0780) Create stream I0811 08:01:09.661927 6 log.go:172] (0xc0015f5760) (0xc001fb0780) Stream added, broadcasting: 1 I0811 08:01:09.668009 6 log.go:172] (0xc0015f5760) Reply frame received for 1 I0811 08:01:09.668067 6 log.go:172] (0xc0015f5760) (0xc00149d0e0) Create stream I0811 08:01:09.668083 6 log.go:172] (0xc0015f5760) (0xc00149d0e0) Stream added, broadcasting: 3 I0811 08:01:09.669146 6 log.go:172] (0xc0015f5760) Reply frame received for 3 I0811 
08:01:09.669178 6 log.go:172] (0xc0015f5760) (0xc00149d220) Create stream I0811 08:01:09.669188 6 log.go:172] (0xc0015f5760) (0xc00149d220) Stream added, broadcasting: 5 I0811 08:01:09.670053 6 log.go:172] (0xc0015f5760) Reply frame received for 5 I0811 08:01:09.730975 6 log.go:172] (0xc0015f5760) Data frame received for 5 I0811 08:01:09.731034 6 log.go:172] (0xc00149d220) (5) Data frame handling I0811 08:01:09.731087 6 log.go:172] (0xc0015f5760) Data frame received for 3 I0811 08:01:09.731156 6 log.go:172] (0xc00149d0e0) (3) Data frame handling I0811 08:01:09.731203 6 log.go:172] (0xc00149d0e0) (3) Data frame sent I0811 08:01:09.731224 6 log.go:172] (0xc0015f5760) Data frame received for 3 I0811 08:01:09.731242 6 log.go:172] (0xc00149d0e0) (3) Data frame handling I0811 08:01:09.732643 6 log.go:172] (0xc0015f5760) Data frame received for 1 I0811 08:01:09.732663 6 log.go:172] (0xc001fb0780) (1) Data frame handling I0811 08:01:09.732678 6 log.go:172] (0xc001fb0780) (1) Data frame sent I0811 08:01:09.732692 6 log.go:172] (0xc0015f5760) (0xc001fb0780) Stream removed, broadcasting: 1 I0811 08:01:09.732871 6 log.go:172] (0xc0015f5760) (0xc001fb0780) Stream removed, broadcasting: 1 I0811 08:01:09.732932 6 log.go:172] (0xc0015f5760) (0xc00149d0e0) Stream removed, broadcasting: 3 I0811 08:01:09.732971 6 log.go:172] (0xc0015f5760) (0xc00149d220) Stream removed, broadcasting: 5 Aug 11 08:01:09.732: INFO: Exec stderr: "" Aug 11 08:01:09.733: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:09.733: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:09.733056 6 log.go:172] (0xc0015f5760) Go away received I0811 08:01:09.760001 6 log.go:172] (0xc002e80370) (0xc001fb0aa0) Create stream I0811 08:01:09.760022 6 log.go:172] (0xc002e80370) (0xc001fb0aa0) Stream added, broadcasting: 1 I0811 08:01:09.762729 6 log.go:172] (0xc002e80370) Reply frame received for 1 I0811 08:01:09.762835 6 log.go:172] (0xc002e80370) (0xc001fb0b40) Create stream I0811 08:01:09.762858 6 log.go:172] (0xc002e80370) (0xc001fb0b40) Stream added, broadcasting: 3 I0811 08:01:09.763880 6 log.go:172] (0xc002e80370) Reply frame received for 3 I0811 08:01:09.763928 6 log.go:172] (0xc002e80370) (0xc001fb0c80) Create stream I0811 08:01:09.763957 6 log.go:172] (0xc002e80370) (0xc001fb0c80) Stream added, broadcasting: 5 I0811 08:01:09.765054 6 log.go:172] (0xc002e80370) Reply frame received for 5 I0811 08:01:09.823185 6 log.go:172] (0xc002e80370) Data frame received for 5 I0811 08:01:09.823221 6 log.go:172] (0xc001fb0c80) (5) Data frame handling I0811 08:01:09.823246 6 log.go:172] (0xc002e80370) Data frame received for 3 I0811 08:01:09.823280 6 log.go:172] (0xc001fb0b40) (3) Data frame handling I0811 08:01:09.823321 6 log.go:172] (0xc001fb0b40) (3) Data frame sent I0811 08:01:09.823346 6 log.go:172] (0xc002e80370) Data frame received for 3 I0811 08:01:09.823360 6 log.go:172] (0xc001fb0b40) (3) Data frame handling I0811 08:01:09.824902 6 log.go:172] (0xc002e80370) Data frame received for 1 I0811 08:01:09.824931 6 log.go:172] (0xc001fb0aa0) (1) Data frame handling I0811 08:01:09.824955 6 log.go:172] (0xc001fb0aa0) (1) Data frame sent I0811 08:01:09.824975 6 log.go:172] (0xc002e80370) (0xc001fb0aa0) Stream removed, broadcasting: 1 I0811 08:01:09.825003 6 log.go:172] (0xc002e80370) Go away received I0811 08:01:09.825093 6 log.go:172] (0xc002e80370) (0xc001fb0aa0) Stream removed, 
broadcasting: 1 I0811 08:01:09.825114 6 log.go:172] (0xc002e80370) (0xc001fb0b40) Stream removed, broadcasting: 3 I0811 08:01:09.825124 6 log.go:172] (0xc002e80370) (0xc001fb0c80) Stream removed, broadcasting: 5 Aug 11 08:01:09.825: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 11 08:01:09.825: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:09.825: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:09.858176 6 log.go:172] (0xc002e81130) (0xc001fb0fa0) Create stream I0811 08:01:09.858201 6 log.go:172] (0xc002e81130) (0xc001fb0fa0) Stream added, broadcasting: 1 I0811 08:01:09.860617 6 log.go:172] (0xc002e81130) Reply frame received for 1 I0811 08:01:09.860661 6 log.go:172] (0xc002e81130) (0xc0030a4b40) Create stream I0811 08:01:09.860685 6 log.go:172] (0xc002e81130) (0xc0030a4b40) Stream added, broadcasting: 3 I0811 08:01:09.862138 6 log.go:172] (0xc002e81130) Reply frame received for 3 I0811 08:01:09.862186 6 log.go:172] (0xc002e81130) (0xc002edf540) Create stream I0811 08:01:09.862211 6 log.go:172] (0xc002e81130) (0xc002edf540) Stream added, broadcasting: 5 I0811 08:01:09.863290 6 log.go:172] (0xc002e81130) Reply frame received for 5 I0811 08:01:09.923536 6 log.go:172] (0xc002e81130) Data frame received for 5 I0811 08:01:09.923589 6 log.go:172] (0xc002edf540) (5) Data frame handling I0811 08:01:09.923624 6 log.go:172] (0xc002e81130) Data frame received for 3 I0811 08:01:09.923642 6 log.go:172] (0xc0030a4b40) (3) Data frame handling I0811 08:01:09.923666 6 log.go:172] (0xc0030a4b40) (3) Data frame sent I0811 08:01:09.923682 6 log.go:172] (0xc002e81130) Data frame received for 3 I0811 08:01:09.923698 6 log.go:172] (0xc0030a4b40) (3) Data frame handling I0811 08:01:09.924937 6 log.go:172] (0xc002e81130) Data frame received for 1 I0811 08:01:09.924950 6 log.go:172] (0xc001fb0fa0) (1) Data frame handling I0811 08:01:09.924956 6 log.go:172] (0xc001fb0fa0) (1) Data frame sent I0811 08:01:09.924963 6 log.go:172] (0xc002e81130) (0xc001fb0fa0) Stream removed, broadcasting: 1 I0811 08:01:09.925024 6 log.go:172] (0xc002e81130) Go away received I0811 08:01:09.925097 6 log.go:172] (0xc002e81130) (0xc001fb0fa0) Stream removed, broadcasting: 1 I0811 08:01:09.925108 6 log.go:172] (0xc002e81130) (0xc0030a4b40) Stream removed, broadcasting: 3 I0811 08:01:09.925118 6 log.go:172] (0xc002e81130) (0xc002edf540) Stream removed, broadcasting: 5 Aug 11 08:01:09.925: INFO: Exec stderr: "" Aug 11 08:01:09.925: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:09.925: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:09.954568 6 log.go:172] (0xc001a0b3f0) (0xc0030a4be0) Create stream I0811 08:01:09.954600 6 log.go:172] (0xc001a0b3f0) (0xc0030a4be0) Stream added, broadcasting: 1 I0811 08:01:09.957525 6 log.go:172] (0xc001a0b3f0) Reply frame received for 1 I0811 08:01:09.957562 6 log.go:172] (0xc001a0b3f0) (0xc0030a4c80) Create stream I0811 08:01:09.957576 6 log.go:172] (0xc001a0b3f0) (0xc0030a4c80) Stream added, broadcasting: 3 I0811 08:01:09.958669 6 log.go:172] (0xc001a0b3f0) Reply frame received for 3 I0811 08:01:09.958720 6 log.go:172] (0xc001a0b3f0) (0xc002edf5e0) Create stream I0811 08:01:09.958735 
6 log.go:172] (0xc001a0b3f0) (0xc002edf5e0) Stream added, broadcasting: 5 I0811 08:01:09.959684 6 log.go:172] (0xc001a0b3f0) Reply frame received for 5 I0811 08:01:10.034097 6 log.go:172] (0xc001a0b3f0) Data frame received for 5 I0811 08:01:10.034142 6 log.go:172] (0xc002edf5e0) (5) Data frame handling I0811 08:01:10.034191 6 log.go:172] (0xc001a0b3f0) Data frame received for 3 I0811 08:01:10.034219 6 log.go:172] (0xc0030a4c80) (3) Data frame handling I0811 08:01:10.034270 6 log.go:172] (0xc0030a4c80) (3) Data frame sent I0811 08:01:10.034291 6 log.go:172] (0xc001a0b3f0) Data frame received for 3 I0811 08:01:10.034321 6 log.go:172] (0xc0030a4c80) (3) Data frame handling I0811 08:01:10.035801 6 log.go:172] (0xc001a0b3f0) Data frame received for 1 I0811 08:01:10.035822 6 log.go:172] (0xc0030a4be0) (1) Data frame handling I0811 08:01:10.035844 6 log.go:172] (0xc0030a4be0) (1) Data frame sent I0811 08:01:10.035876 6 log.go:172] (0xc001a0b3f0) (0xc0030a4be0) Stream removed, broadcasting: 1 I0811 08:01:10.035988 6 log.go:172] (0xc001a0b3f0) (0xc0030a4be0) Stream removed, broadcasting: 1 I0811 08:01:10.036008 6 log.go:172] (0xc001a0b3f0) (0xc0030a4c80) Stream removed, broadcasting: 3 I0811 08:01:10.036158 6 log.go:172] (0xc001a0b3f0) (0xc002edf5e0) Stream removed, broadcasting: 5 Aug 11 08:01:10.036: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true I0811 08:01:10.036236 6 log.go:172] (0xc001a0b3f0) Go away received Aug 11 08:01:10.036: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:10.036: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:10.066921 6 log.go:172] (0xc0024ec210) (0xc001fb1360) Create stream I0811 08:01:10.066954 6 log.go:172] (0xc0024ec210) (0xc001fb1360) Stream added, broadcasting: 1 I0811 08:01:10.069760 6 log.go:172] (0xc0024ec210) Reply frame received for 1 I0811 08:01:10.069798 6 log.go:172] (0xc0024ec210) (0xc0032c66e0) Create stream I0811 08:01:10.069811 6 log.go:172] (0xc0024ec210) (0xc0032c66e0) Stream added, broadcasting: 3 I0811 08:01:10.070714 6 log.go:172] (0xc0024ec210) Reply frame received for 3 I0811 08:01:10.070769 6 log.go:172] (0xc0024ec210) (0xc001fb1400) Create stream I0811 08:01:10.070789 6 log.go:172] (0xc0024ec210) (0xc001fb1400) Stream added, broadcasting: 5 I0811 08:01:10.071588 6 log.go:172] (0xc0024ec210) Reply frame received for 5 I0811 08:01:10.141131 6 log.go:172] (0xc0024ec210) Data frame received for 5 I0811 08:01:10.141171 6 log.go:172] (0xc001fb1400) (5) Data frame handling I0811 08:01:10.141205 6 log.go:172] (0xc0024ec210) Data frame received for 3 I0811 08:01:10.141247 6 log.go:172] (0xc0032c66e0) (3) Data frame handling I0811 08:01:10.141274 6 log.go:172] (0xc0032c66e0) (3) Data frame sent I0811 08:01:10.141298 6 log.go:172] (0xc0024ec210) Data frame received for 3 I0811 08:01:10.141309 6 log.go:172] (0xc0032c66e0) (3) Data frame handling I0811 08:01:10.143060 6 log.go:172] (0xc0024ec210) Data frame received for 1 I0811 08:01:10.143085 6 log.go:172] (0xc001fb1360) (1) Data frame handling I0811 08:01:10.143103 6 log.go:172] (0xc001fb1360) (1) Data frame sent I0811 08:01:10.143123 6 log.go:172] (0xc0024ec210) (0xc001fb1360) Stream removed, broadcasting: 1 I0811 08:01:10.143273 6 log.go:172] (0xc0024ec210) (0xc001fb1360) Stream removed, broadcasting: 1 I0811 08:01:10.143296 6 log.go:172] 
(0xc0024ec210) (0xc0032c66e0) Stream removed, broadcasting: 3 I0811 08:01:10.143482 6 log.go:172] (0xc0024ec210) (0xc001fb1400) Stream removed, broadcasting: 5 Aug 11 08:01:10.143: INFO: Exec stderr: "" Aug 11 08:01:10.143: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:10.143: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:10.144954 6 log.go:172] (0xc0024ec210) Go away received I0811 08:01:10.177709 6 log.go:172] (0xc0026c4210) (0xc0030a4fa0) Create stream I0811 08:01:10.177736 6 log.go:172] (0xc0026c4210) (0xc0030a4fa0) Stream added, broadcasting: 1 I0811 08:01:10.181545 6 log.go:172] (0xc0026c4210) Reply frame received for 1 I0811 08:01:10.181576 6 log.go:172] (0xc0026c4210) (0xc00149d2c0) Create stream I0811 08:01:10.181592 6 log.go:172] (0xc0026c4210) (0xc00149d2c0) Stream added, broadcasting: 3 I0811 08:01:10.182677 6 log.go:172] (0xc0026c4210) Reply frame received for 3 I0811 08:01:10.182744 6 log.go:172] (0xc0026c4210) (0xc0032c6780) Create stream I0811 08:01:10.182769 6 log.go:172] (0xc0026c4210) (0xc0032c6780) Stream added, broadcasting: 5 I0811 08:01:10.183885 6 log.go:172] (0xc0026c4210) Reply frame received for 5 I0811 08:01:10.262395 6 log.go:172] (0xc0026c4210) Data frame received for 5 I0811 08:01:10.262434 6 log.go:172] (0xc0032c6780) (5) Data frame handling I0811 08:01:10.262462 6 log.go:172] (0xc0026c4210) Data frame received for 3 I0811 08:01:10.262489 6 log.go:172] (0xc00149d2c0) (3) Data frame handling I0811 08:01:10.262503 6 log.go:172] (0xc00149d2c0) (3) Data frame sent I0811 08:01:10.262510 6 log.go:172] (0xc0026c4210) Data frame received for 3 I0811 08:01:10.262516 6 log.go:172] (0xc00149d2c0) (3) Data frame handling I0811 08:01:10.263736 6 log.go:172] (0xc0026c4210) Data frame received for 1 I0811 08:01:10.263750 6 log.go:172] (0xc0030a4fa0) (1) Data frame handling I0811 08:01:10.263766 6 log.go:172] (0xc0030a4fa0) (1) Data frame sent I0811 08:01:10.263783 6 log.go:172] (0xc0026c4210) (0xc0030a4fa0) Stream removed, broadcasting: 1 I0811 08:01:10.263797 6 log.go:172] (0xc0026c4210) Go away received I0811 08:01:10.263895 6 log.go:172] (0xc0026c4210) (0xc0030a4fa0) Stream removed, broadcasting: 1 I0811 08:01:10.263918 6 log.go:172] (0xc0026c4210) (0xc00149d2c0) Stream removed, broadcasting: 3 I0811 08:01:10.263928 6 log.go:172] (0xc0026c4210) (0xc0032c6780) Stream removed, broadcasting: 5 Aug 11 08:01:10.263: INFO: Exec stderr: "" Aug 11 08:01:10.263: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 11 08:01:10.263: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:10.294518 6 log.go:172] (0xc0024ecc60) (0xc001fb17c0) Create stream I0811 08:01:10.294554 6 log.go:172] (0xc0024ecc60) (0xc001fb17c0) Stream added, broadcasting: 1 I0811 08:01:10.297970 6 log.go:172] (0xc0024ecc60) Reply frame received for 1 I0811 08:01:10.298040 6 log.go:172] (0xc0024ecc60) (0xc0032c6820) Create stream I0811 08:01:10.298131 6 log.go:172] (0xc0024ecc60) (0xc0032c6820) Stream added, broadcasting: 3 I0811 08:01:10.300109 6 log.go:172] (0xc0024ecc60) Reply frame received for 3 I0811 08:01:10.300164 6 log.go:172] (0xc0024ecc60) (0xc002edf680) Create stream I0811 08:01:10.300185 6 log.go:172] (0xc0024ecc60) (0xc002edf680) Stream added, 
broadcasting: 5 I0811 08:01:10.301056 6 log.go:172] (0xc0024ecc60) Reply frame received for 5 I0811 08:01:10.363319 6 log.go:172] (0xc0024ecc60) Data frame received for 5 I0811 08:01:10.363356 6 log.go:172] (0xc002edf680) (5) Data frame handling I0811 08:01:10.363390 6 log.go:172] (0xc0024ecc60) Data frame received for 3 I0811 08:01:10.363425 6 log.go:172] (0xc0032c6820) (3) Data frame handling I0811 08:01:10.363445 6 log.go:172] (0xc0032c6820) (3) Data frame sent I0811 08:01:10.363460 6 log.go:172] (0xc0024ecc60) Data frame received for 3 I0811 08:01:10.363472 6 log.go:172] (0xc0032c6820) (3) Data frame handling I0811 08:01:10.364550 6 log.go:172] (0xc0024ecc60) Data frame received for 1 I0811 08:01:10.364567 6 log.go:172] (0xc001fb17c0) (1) Data frame handling I0811 08:01:10.364587 6 log.go:172] (0xc001fb17c0) (1) Data frame sent I0811 08:01:10.364606 6 log.go:172] (0xc0024ecc60) (0xc001fb17c0) Stream removed, broadcasting: 1 I0811 08:01:10.364689 6 log.go:172] (0xc0024ecc60) (0xc001fb17c0) Stream removed, broadcasting: 1 I0811 08:01:10.364706 6 log.go:172] (0xc0024ecc60) (0xc0032c6820) Stream removed, broadcasting: 3 I0811 08:01:10.364797 6 log.go:172] (0xc0024ecc60) (0xc002edf680) Stream removed, broadcasting: 5 Aug 11 08:01:10.364: INFO: Exec stderr: "" Aug 11 08:01:10.364: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1766 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0811 08:01:10.364907 6 log.go:172] (0xc0024ecc60) Go away received Aug 11 08:01:10.364: INFO: >>> kubeConfig: /root/.kube/config I0811 08:01:10.396655 6 log.go:172] (0xc00322d340) (0xc00149d720) Create stream I0811 08:01:10.396693 6 log.go:172] (0xc00322d340) (0xc00149d720) Stream added, broadcasting: 1 I0811 08:01:10.399290 6 log.go:172] (0xc00322d340) Reply frame received for 1 I0811 08:01:10.399352 6 log.go:172] (0xc00322d340) (0xc0030a5040) Create stream I0811 08:01:10.399369 6 log.go:172] (0xc00322d340) (0xc0030a5040) Stream added, broadcasting: 3 I0811 08:01:10.400385 6 log.go:172] (0xc00322d340) Reply frame received for 3 I0811 08:01:10.400431 6 log.go:172] (0xc00322d340) (0xc00149d7c0) Create stream I0811 08:01:10.400446 6 log.go:172] (0xc00322d340) (0xc00149d7c0) Stream added, broadcasting: 5 I0811 08:01:10.401573 6 log.go:172] (0xc00322d340) Reply frame received for 5 I0811 08:01:10.479072 6 log.go:172] (0xc00322d340) Data frame received for 3 I0811 08:01:10.479119 6 log.go:172] (0xc0030a5040) (3) Data frame handling I0811 08:01:10.479153 6 log.go:172] (0xc0030a5040) (3) Data frame sent I0811 08:01:10.479169 6 log.go:172] (0xc00322d340) Data frame received for 3 I0811 08:01:10.479182 6 log.go:172] (0xc0030a5040) (3) Data frame handling I0811 08:01:10.479206 6 log.go:172] (0xc00322d340) Data frame received for 5 I0811 08:01:10.479219 6 log.go:172] (0xc00149d7c0) (5) Data frame handling I0811 08:01:10.480954 6 log.go:172] (0xc00322d340) Data frame received for 1 I0811 08:01:10.480982 6 log.go:172] (0xc00149d720) (1) Data frame handling I0811 08:01:10.481010 6 log.go:172] (0xc00149d720) (1) Data frame sent I0811 08:01:10.481030 6 log.go:172] (0xc00322d340) (0xc00149d720) Stream removed, broadcasting: 1 I0811 08:01:10.481049 6 log.go:172] (0xc00322d340) Go away received I0811 08:01:10.481242 6 log.go:172] (0xc00322d340) (0xc00149d720) Stream removed, broadcasting: 1 I0811 08:01:10.481271 6 log.go:172] (0xc00322d340) (0xc0030a5040) Stream removed, broadcasting: 3 I0811 08:01:10.481286 6 
log.go:172] (0xc00322d340) (0xc00149d7c0) Stream removed, broadcasting: 5
Aug 11 08:01:10.481: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:01:10.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1766" for this suite.
Aug 11 08:02:00.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:02:00.574: INFO: namespace e2e-kubelet-etc-hosts-1766 deletion completed in 50.088835395s

• [SLOW TEST:61.402 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
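The /etc/hosts test above relies on two exceptions the kubelet makes when managing that file: it leaves /etc/hosts alone for pods running with hostNetwork, and for any container that mounts its own volume over /etc/hosts. A minimal sketch of the host-network case, assuming cs/ctx as above; names are illustrative:

	hostsPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			// With hostNetwork the kubelet does not rewrite /etc/hosts,
			// so the container sees the node's own file.
			HostNetwork:   true,
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	cs.CoreV1().Pods("default").Create(ctx, hostsPod, metav1.CreateOptions{})
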
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:02:00.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:02:00.655: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/
(identical directory listing repeated for proxy attempts 1 through 19)
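Each attempt above fetches the kubelet's log directory through the node's proxy subresource on the API server. A minimal client-go sketch of the same request, assuming cs from the first sketch:

	data, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-worker").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	// On success, data holds the directory listing seen above
	// (alternatives.log, containers/, ...).
	_, _ = data, err
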
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-622341fd-b97f-4199-a7f7-bc5a7992320c
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:02:06.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9703" for this suite.
Aug 11 08:02:12.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:02:12.999: INFO: namespace secrets-9703 deletion completed in 6.085138422s

• [SLOW TEST:6.153 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
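This negative test needs no pod at all: the API server's validation rejects a Secret whose data key is empty, so the object is never persisted. A minimal sketch, assuming cs as above; the secret name is illustrative:

	_, err := cs.CoreV1().Secrets("default").Create(context.TODO(), &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
		Data:       map[string][]byte{"": []byte("value")}, // "" is not a valid data key
	}, metav1.CreateOptions{})
	// err is non-nil: the request fails at validation time.
	_ = err
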
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:02:13.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:02:13.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:02:19.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9223" for this suite.
Aug 11 08:03:09.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:03:09.249: INFO: namespace pods-9223 deletion completed in 50.131463033s

• [SLOW TEST:56.249 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
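The test above reads the pod's /log endpoint over a websocket connection; everyday client-go code streams the same endpoint over plain HTTP, which is the closest equivalent I can show without reimplementing the websocket transport. A minimal sketch, assuming cs as above; the pod name is illustrative:

	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-demo", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // needs "io" and "os" imported
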
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:03:09.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b2496ce0-b9ec-44e4-93fb-20034092187f
STEP: Creating a pod to test consume secrets
Aug 11 08:03:09.324: INFO: Waiting up to 5m0s for pod "pod-secrets-07760077-55f3-414d-b44c-e79ed514b090" in namespace "secrets-1510" to be "success or failure"
Aug 11 08:03:09.341: INFO: Pod "pod-secrets-07760077-55f3-414d-b44c-e79ed514b090": Phase="Pending", Reason="", readiness=false. Elapsed: 16.592196ms
Aug 11 08:03:11.346: INFO: Pod "pod-secrets-07760077-55f3-414d-b44c-e79ed514b090": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021511963s
Aug 11 08:03:13.349: INFO: Pod "pod-secrets-07760077-55f3-414d-b44c-e79ed514b090": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024931733s
STEP: Saw pod success
Aug 11 08:03:13.349: INFO: Pod "pod-secrets-07760077-55f3-414d-b44c-e79ed514b090" satisfied condition "success or failure"
Aug 11 08:03:13.351: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-07760077-55f3-414d-b44c-e79ed514b090 container secret-volume-test: 
STEP: delete the pod
Aug 11 08:03:13.425: INFO: Waiting for pod pod-secrets-07760077-55f3-414d-b44c-e79ed514b090 to disappear
Aug 11 08:03:13.439: INFO: Pod pod-secrets-07760077-55f3-414d-b44c-e79ed514b090 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:03:13.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1510" for this suite.
Aug 11 08:03:19.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:03:19.539: INFO: namespace secrets-1510 deletion completed in 6.095843185s

• [SLOW TEST:10.290 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
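This run is the simpler sibling of the earlier non-root secret test: only defaultMode is set, with no user or fsGroup override, so the kubelet writes each projected file with exactly that mode. The one differing piece, as a sketch with illustrative names:

	mode := int32(0400)
	src := corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: "secret-demo",
			// Applies to every file in the volume unless overridden per item.
			DefaultMode: &mode,
		},
	}
	_ = src // plug into a pod spec as in the earlier secret example
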
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:03:19.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-tdtnf in namespace proxy-6759
I0811 08:03:19.701465       6 runners.go:180] Created replication controller with name: proxy-service-tdtnf, namespace: proxy-6759, replica count: 1
I0811 08:03:20.751925       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 08:03:21.752099       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 08:03:22.752297       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 08:03:23.752512       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:24.752713       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:25.753010       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:26.753298       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:27.753542       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:28.753740       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:29.753931       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0811 08:03:30.754165       6 runners.go:180] proxy-service-tdtnf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 08:03:30.758: INFO: setup took 11.141851692s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 11 08:03:30.763: INFO: (0) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 5.782861ms)
Aug 11 08:03:30.793: INFO: (0) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 34.83882ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 38.696146ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 38.76168ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 38.837773ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 38.764589ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 38.945673ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 38.872442ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 38.900028ms)
Aug 11 08:03:30.797: INFO: (0) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 39.11766ms)
Aug 11 08:03:30.801: INFO: (0) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 43.501417ms)
Aug 11 08:03:30.805: INFO: (0) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 46.790146ms)
Aug 11 08:03:30.805: INFO: (0) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 46.814557ms)
Aug 11 08:03:30.805: INFO: (0) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 47.311628ms)
Aug 11 08:03:30.807: INFO: (0) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 49.354332ms)
Aug 11 08:03:30.807: INFO: (0) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 4.513728ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 5.334586ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 5.424655ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 5.387232ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 5.527405ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 5.393856ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: ... (200; 5.572919ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 6.215288ms)
Aug 11 08:03:30.813: INFO: (1) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 6.149177ms)
Aug 11 08:03:30.814: INFO: (1) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 6.498131ms)
Aug 11 08:03:30.814: INFO: (1) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 6.441641ms)
Aug 11 08:03:30.814: INFO: (1) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 6.515596ms)
Aug 11 08:03:30.818: INFO: (2) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.497515ms)
Aug 11 08:03:30.818: INFO: (2) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.51335ms)
Aug 11 08:03:30.818: INFO: (2) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 4.525976ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 4.603824ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.664616ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 4.682509ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 4.660977ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 4.833874ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.845661ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.934541ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.993342ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 5.120575ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 5.020506ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 5.143443ms)
Aug 11 08:03:30.819: INFO: (2) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 5.088363ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 6.470534ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 6.50004ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 6.678582ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 6.667115ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 6.745532ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 7.120864ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 7.304451ms)
Aug 11 08:03:30.826: INFO: (3) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 7.281305ms)
Aug 11 08:03:30.827: INFO: (3) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 7.504284ms)
Aug 11 08:03:30.827: INFO: (3) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 7.737418ms)
Aug 11 08:03:30.830: INFO: (4) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.353634ms)
Aug 11 08:03:30.830: INFO: (4) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.349752ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.963428ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.083489ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 4.231168ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.214027ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 4.251237ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 4.246887ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.356056ms)
Aug 11 08:03:30.831: INFO: (4) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.388103ms)
Aug 11 08:03:30.832: INFO: (4) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.721402ms)
Aug 11 08:03:30.832: INFO: (4) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.812826ms)
Aug 11 08:03:30.832: INFO: (4) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 4.79072ms)
Aug 11 08:03:30.832: INFO: (4) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.838265ms)
Aug 11 08:03:30.832: INFO: (4) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.785636ms)
Aug 11 08:03:30.834: INFO: (5) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 2.322094ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 2.758562ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 2.883031ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.006249ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 2.99177ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.093937ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 3.182133ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 3.162745ms)
Aug 11 08:03:30.835: INFO: (5) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: ... (200; 2.113001ms)
Aug 11 08:03:30.840: INFO: (6) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.562612ms)
Aug 11 08:03:30.840: INFO: (6) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 3.665547ms)
Aug 11 08:03:30.840: INFO: (6) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 3.881775ms)
Aug 11 08:03:30.840: INFO: (6) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.938402ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.17781ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 4.161302ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.545023ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 4.611117ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.71223ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.659981ms)
Aug 11 08:03:30.841: INFO: (6) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 4.720568ms)
Aug 11 08:03:30.843: INFO: (7) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 1.84379ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 3.750764ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.106622ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.183167ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.192582ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 4.261854ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 4.267548ms)
Aug 11 08:03:30.845: INFO: (7) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 4.269011ms)
Aug 11 08:03:30.846: INFO: (7) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.700489ms)
Aug 11 08:03:30.846: INFO: (7) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.683736ms)
Aug 11 08:03:30.846: INFO: (7) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 4.732436ms)
Aug 11 08:03:30.846: INFO: (7) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.756985ms)
Aug 11 08:03:30.846: INFO: (7) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 2.847243ms)
Aug 11 08:03:30.849: INFO: (8) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 2.955207ms)
Aug 11 08:03:30.849: INFO: (8) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.039847ms)
Aug 11 08:03:30.849: INFO: (8) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.061952ms)
Aug 11 08:03:30.850: INFO: (8) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.803447ms)
Aug 11 08:03:30.850: INFO: (8) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 4.335257ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.309627ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.291211ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.3251ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.322879ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.355692ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 4.29892ms)
Aug 11 08:03:30.851: INFO: (8) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 4.418026ms)
Aug 11 08:03:30.854: INFO: (9) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 2.843579ms)
Aug 11 08:03:30.854: INFO: (9) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.447188ms)
Aug 11 08:03:30.854: INFO: (9) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 3.542832ms)
Aug 11 08:03:30.854: INFO: (9) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.590044ms)
Aug 11 08:03:30.854: INFO: (9) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.598717ms)
Aug 11 08:03:30.854: INFO: (9) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 3.591857ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 3.987167ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.005148ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 4.00295ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 4.071502ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.093282ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.209038ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.157137ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.178717ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 4.324447ms)
Aug 11 08:03:30.855: INFO: (9) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 3.74018ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.115384ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.147368ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.209257ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 4.133234ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 4.238828ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.318776ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 4.318886ms)
Aug 11 08:03:30.860: INFO: (10) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 13.897925ms)
Aug 11 08:03:30.874: INFO: (11) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 14.024509ms)
Aug 11 08:03:30.874: INFO: (11) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 14.047042ms)
Aug 11 08:03:30.874: INFO: (11) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 14.00574ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 14.927965ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 14.991452ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 15.057572ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 15.087089ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 15.192103ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 15.108471ms)
Aug 11 08:03:30.875: INFO: (11) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 15.34847ms)
Aug 11 08:03:30.878: INFO: (12) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 2.898255ms)
Aug 11 08:03:30.878: INFO: (12) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.017516ms)
Aug 11 08:03:30.879: INFO: (12) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.600839ms)
Aug 11 08:03:30.879: INFO: (12) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 3.699558ms)
Aug 11 08:03:30.879: INFO: (12) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.851525ms)
Aug 11 08:03:30.880: INFO: (12) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.155339ms)
Aug 11 08:03:30.880: INFO: (12) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.169049ms)
Aug 11 08:03:30.880: INFO: (12) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.194501ms)
Aug 11 08:03:30.880: INFO: (12) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 5.176609ms)
Aug 11 08:03:30.881: INFO: (12) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 5.255999ms)
Aug 11 08:03:30.881: INFO: (12) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 5.858729ms)
Aug 11 08:03:30.885: INFO: (13) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.504552ms)
Aug 11 08:03:30.885: INFO: (13) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.587575ms)
Aug 11 08:03:30.885: INFO: (13) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 3.614601ms)
Aug 11 08:03:30.885: INFO: (13) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 3.71542ms)
Aug 11 08:03:30.885: INFO: (13) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 3.631432ms)
Aug 11 08:03:30.886: INFO: (13) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.873994ms)
Aug 11 08:03:30.886: INFO: (13) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.827025ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 5.038662ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 5.176493ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 5.29994ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 5.339996ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 5.450557ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 5.424671ms)
Aug 11 08:03:30.887: INFO: (13) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: ... (200; 3.408258ms)
Aug 11 08:03:30.891: INFO: (14) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 3.395646ms)
Aug 11 08:03:30.891: INFO: (14) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.797026ms)
Aug 11 08:03:30.891: INFO: (14) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.008047ms)
Aug 11 08:03:30.891: INFO: (14) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 4.14405ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 4.241796ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.362369ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 4.582448ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.627202ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 4.635016ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 4.686015ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.736596ms)
Aug 11 08:03:30.892: INFO: (14) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.719282ms)
Aug 11 08:03:30.895: INFO: (15) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 2.943585ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 3.820882ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 3.841538ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 3.871062ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.873404ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.004028ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 4.024371ms)
Aug 11 08:03:30.896: INFO: (15) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 4.074105ms)
Aug 11 08:03:30.897: INFO: (15) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.492725ms)
Aug 11 08:03:30.897: INFO: (15) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 4.460007ms)
Aug 11 08:03:30.897: INFO: (15) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 4.455453ms)
Aug 11 08:03:30.897: INFO: (15) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.46986ms)
Aug 11 08:03:30.897: INFO: (15) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 1.866311ms)
Aug 11 08:03:30.900: INFO: (16) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 3.569891ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname1/proxy/: foo (200; 3.606037ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname2/proxy/: bar (200; 3.665007ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname1/proxy/: tls baz (200; 3.737017ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 3.748938ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.840815ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.207438ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.188967ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.207756ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 4.273173ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 4.252711ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 4.260274ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.271703ms)
Aug 11 08:03:30.901: INFO: (16) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.329743ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.846274ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 3.835053ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 3.942324ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 3.94757ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:460/proxy/: tls baz (200; 3.973112ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 3.978994ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.023435ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.003309ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 3.994154ms)
Aug 11 08:03:30.905: INFO: (17) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test<... (200; 4.541436ms)
Aug 11 08:03:30.913: INFO: (18) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 4.700863ms)
Aug 11 08:03:30.913: INFO: (18) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x/proxy/: test (200; 4.613425ms)
Aug 11 08:03:30.913: INFO: (18) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 4.701915ms)
Aug 11 08:03:30.913: INFO: (18) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:462/proxy/: tls qux (200; 4.803542ms)
Aug 11 08:03:30.914: INFO: (18) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 4.87707ms)
Aug 11 08:03:30.914: INFO: (18) /api/v1/namespaces/proxy-6759/pods/https:proxy-service-tdtnf-zhs7x:443/proxy/: test (200; 6.093761ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/services/proxy-service-tdtnf:portname1/proxy/: foo (200; 6.054652ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 6.090897ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:1080/proxy/: ... (200; 6.117917ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/services/http:proxy-service-tdtnf:portname2/proxy/: bar (200; 6.023459ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/pods/http:proxy-service-tdtnf-zhs7x:162/proxy/: bar (200; 6.006777ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/services/https:proxy-service-tdtnf:tlsportname2/proxy/: tls qux (200; 6.164627ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:1080/proxy/: test<... (200; 6.086406ms)
Aug 11 08:03:30.923: INFO: (19) /api/v1/namespaces/proxy-6759/pods/proxy-service-tdtnf-zhs7x:160/proxy/: foo (200; 6.053014ms)
STEP: deleting ReplicationController proxy-service-tdtnf in namespace proxy-6759, will wait for the garbage collector to delete the pods
Aug 11 08:03:30.981: INFO: Deleting ReplicationController proxy-service-tdtnf took: 6.320404ms
Aug 11 08:03:31.282: INFO: Terminating ReplicationController proxy-service-tdtnf pods took: 300.275612ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:03:45.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6759" for this suite.
Aug 11 08:03:51.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:03:51.187: INFO: namespace proxy-6759 deletion completed in 6.10010074s

• [SLOW TEST:31.648 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
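Editor's note: each of the 16 cases in the proxy test is one URL shape of the apiserver's proxy subresource, hit against either the pod or the service. The sketch below just reproduces the URL set visible in the log (names taken from this run); it shows why 16 cases at 20 attempts each yields the 320 total attempts reported.

package main

import "fmt"

func main() {
	// Proxy subresource grammar exercised by the test:
	//   /api/v1/namespaces/NS/pods/[SCHEME:]POD[:PORT]/proxy/
	//   /api/v1/namespaces/NS/services/[SCHEME:]SVC[:PORT]/proxy/
	// SCHEME defaults to http when omitted; services may use named ports.
	ns, pod, svc := "proxy-6759", "proxy-service-tdtnf-zhs7x", "proxy-service-tdtnf"

	pods := []string{pod, pod + ":160", pod + ":162", pod + ":1080",
		"http:" + pod + ":160", "http:" + pod + ":162", "http:" + pod + ":1080",
		"https:" + pod + ":443", "https:" + pod + ":460", "https:" + pod + ":462"}
	svcs := []string{svc + ":portname1", svc + ":portname2",
		"http:" + svc + ":portname1", "http:" + svc + ":portname2",
		"https:" + svc + ":tlsportname1", "https:" + svc + ":tlsportname2"}

	for _, p := range pods {
		fmt.Printf("/api/v1/namespaces/%s/pods/%s/proxy/\n", ns, p)
	}
	for _, s := range svcs {
		fmt.Printf("/api/v1/namespaces/%s/services/%s/proxy/\n", ns, s)
	}
	// 10 pod URLs + 6 service URLs = 16 cases; x20 attempts = 320 total.
}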
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:03:51.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 11 08:03:51.242: INFO: Waiting up to 5m0s for pod "pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2" in namespace "emptydir-6216" to be "success or failure"
Aug 11 08:03:51.245: INFO: Pod "pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.603458ms
Aug 11 08:03:53.935: INFO: Pod "pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693565818s
Aug 11 08:03:55.939: INFO: Pod "pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.697697159s
STEP: Saw pod success
Aug 11 08:03:55.939: INFO: Pod "pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2" satisfied condition "success or failure"
Aug 11 08:03:55.942: INFO: Trying to get logs from node iruya-worker2 pod pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2 container test-container: 
STEP: delete the pod
Aug 11 08:03:55.989: INFO: Waiting for pod pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2 to disappear
Aug 11 08:03:56.222: INFO: Pod pod-430c832e-5aa6-499c-b1e4-65ccfe3514f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:03:56.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6216" for this suite.
Aug 11 08:04:02.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:04:02.334: INFO: namespace emptydir-6216 deletion completed in 6.108364587s

• [SLOW TEST:11.147 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
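Editor's note: the "(non-root,0644,default)" triple in the test name decodes to: run the pod as a non-root user, expect a newly created file to carry mode 0644, and use the default (node-disk-backed) emptyDir medium. A hedged sketch of such a pod follows; the UID, image, and shell command are illustrative stand-ins for the e2e mounttest tooling.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; the concrete value is illustrative

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" (default) is backed by node disk, as opposed
					// to StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// With umask 022 a newly created file lands at mode 0644,
				// which is what the test asserts on.
				Command: []string{"sh", "-c",
					"umask 022 && echo hi > /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}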
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:04:02.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-23227596-f23b-4bca-aeb0-e7c898e146f1
STEP: Creating a pod to test consume configMaps
Aug 11 08:04:02.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5" in namespace "projected-5850" to be "success or failure"
Aug 11 08:04:02.472: INFO: Pod "pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074634ms
Aug 11 08:04:04.475: INFO: Pod "pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011904085s
Aug 11 08:04:06.479: INFO: Pod "pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01591714s
STEP: Saw pod success
Aug 11 08:04:06.480: INFO: Pod "pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5" satisfied condition "success or failure"
Aug 11 08:04:06.482: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 08:04:06.566: INFO: Waiting for pod pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5 to disappear
Aug 11 08:04:06.573: INFO: Pod pod-projected-configmaps-7575744a-21d6-4950-b356-e4cc11c348d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:04:06.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5850" for this suite.
Aug 11 08:04:12.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:04:12.871: INFO: namespace projected-5850 deletion completed in 6.294915112s

• [SLOW TEST:10.535 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
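Editor's note: "with mappings" means the projection carries explicit Items, so a ConfigMap key is remapped to a chosen relative path under the mount point instead of landing at its own name. A minimal sketch of the volume definition, with the key and target path as illustrative assumptions (the ConfigMap name is the one from the run above):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map-23227596-f23b-4bca-aeb0-e7c898e146f1",
						},
						// The mapping: key "data-1" becomes the file
						// path/to/data-2 relative to the mount point.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}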
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:04:12.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 11 08:04:12.960: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 08:04:12.966: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 08:04:12.969: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 11 08:04:12.973: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 11 08:04:12.973: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 11 08:04:12.973: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 11 08:04:12.973: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 08:04:12.973: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 11 08:04:12.977: INFO: rally-91440b61-h4flkw2j from c-rally-91440b61-6adz9tfs started at 2020-08-11 08:02:56 +0000 UTC (1 container statuses recorded)
Aug 11 08:04:12.977: INFO: 	Container rally-91440b61-h4flkw2j ready: true, restart count 0
Aug 11 08:04:12.977: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Aug 11 08:04:12.977: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 08:04:12.977: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Aug 11 08:04:12.977: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162a280cbaaf8b61], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:04:13.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2440" for this suite.
Aug 11 08:04:20.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:04:20.084: INFO: namespace sched-pred-2440 deletion completed in 6.084851854s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.214 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
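Editor's note: the scheduler-predicate test simply submits a pod whose nodeSelector matches no node and then waits for the FailedScheduling warning quoted above ("0/3 nodes are available: 3 node(s) didn't match node selector."). A sketch of such a pod, with an obviously unmatchable label invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the pod stays Pending and the
			// scheduler records the FailedScheduling event seen in the log.
			NodeSelector: map[string]string{"kubernetes.io/no-such-label": "absent"},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}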
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:04:20.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6b529ddc-1c82-44a7-973d-907d35e84c51
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6b529ddc-1c82-44a7-973d-907d35e84c51
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:05:26.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9849" for this suite.
Aug 11 08:05:48.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:05:48.760: INFO: namespace configmap-9849 deletion completed in 22.107052195s

• [SLOW TEST:88.675 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
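Editor's note: the roughly one-minute stretch between the update step and teardown (08:04:20 to 08:05:26) is expected: configMap volume updates propagate on the kubelet's periodic volume resync plus its cache TTL, not instantly. A sketch of a pod that makes the propagation visible in its own log, assuming busybox and a hypothetical key name data-1:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-watch-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-upd-6b529ddc-1c82-44a7-973d-907d35e84c51",
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "busybox",
				// Re-read the projected key every few seconds; once the
				// kubelet resyncs the volume, the printed value changes.
				Command: []string{"sh", "-c",
					"while true; do cat /etc/configmap-volume/data-1; echo; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}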
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:05:48.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 11 08:05:56.906: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:05:56.944: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:05:58.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:05:58.948: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:00.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:00.950: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:02.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:02.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:04.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:04.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:06.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:06.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:08.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:08.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:10.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:10.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:12.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:12.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:14.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:14.947: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:16.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:16.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:18.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:18.949: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:20.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:20.948: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:22.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:22.948: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:24.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:24.948: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 11 08:06:26.944: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 11 08:06:26.949: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:06:26.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3434" for this suite.
Aug 11 08:06:48.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:06:49.044: INFO: namespace container-lifecycle-hook-3434 deletion completed in 22.086112982s

• [SLOW TEST:60.284 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
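Editor's note: the pod in this test declares a PreStop exec handler, and the run of "still exists" polls above spans about 30 seconds, consistent with the default terminationGracePeriodSeconds of 30 within which the hook must run. A sketch of the shape of such a pod follows; in the v1.15 API the handler type is corev1.Handler (renamed LifecycleHandler in later releases), and the hook command here is an illustrative placeholder (the real test has the hook call a helper server pod so its execution can be verified).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container after the delete request is
					// accepted and before SIGTERM is delivered.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo prestop ran > /tmp/prestop"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}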
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:06:49.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 11 08:06:49.130: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:07:05.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2566" for this suite.
Aug 11 08:07:11.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:07:11.174: INFO: namespace pods-2566 deletion completed in 6.102561952s

• [SLOW TEST:22.130 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
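Editor's note: the "setting up watch" step registers a pod watch before the pod is submitted, so the ADDED and DELETED events asserted on later cannot be missed. A sketch using the pre-context client-go signatures that match this run's v1.15 vintage; the namespace and label selector are hypothetical.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Start watching before the pod is created so no event is missed.
	w, err := clientset.CoreV1().Pods("default").Watch(metav1.ListOptions{
		LabelSelector: "test=watch-example", // hypothetical label
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type) // ADDED, MODIFIED, DELETED
		if ev.Type == watch.Deleted {
			return // pod deletion was observed, as the test asserts
		}
	}
}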
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:07:11.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug 11 08:07:11.231: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix419744762/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:07:11.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-663" for this suite.
Aug 11 08:07:17.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:07:17.397: INFO: namespace kubectl-663 deletion completed in 6.088212322s

• [SLOW TEST:6.223 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
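Editor's note: with --unix-socket, kubectl proxy listens on a unix domain socket instead of a TCP port, and the "retrieving proxy /api/ output" step is just an HTTP GET over that socket. A standard-library sketch of the same request (socket path copied from the log above; the URL's host part is arbitrary because the dialer ignores it):

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	const sock = "/tmp/kubectl-proxy-unix419744762/test"

	client := &http.Client{
		Transport: &http.Transport{
			// Route every request to the proxy's unix socket, ignoring
			// the host and port in the request URL.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}

	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // the APIVersions object served by the apiserver
}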
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:07:17.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-cb3d9a65-fc1a-4ec4-9850-5091757db080
STEP: Creating a pod to test consume configMaps
Aug 11 08:07:17.460: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212" in namespace "projected-6446" to be "success or failure"
Aug 11 08:07:17.465: INFO: Pod "pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212": Phase="Pending", Reason="", readiness=false. Elapsed: 5.043685ms
Aug 11 08:07:19.469: INFO: Pod "pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009200638s
Aug 11 08:07:21.473: INFO: Pod "pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012849437s
STEP: Saw pod success
Aug 11 08:07:21.473: INFO: Pod "pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212" satisfied condition "success or failure"
Aug 11 08:07:21.475: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 08:07:21.511: INFO: Waiting for pod pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212 to disappear
Aug 11 08:07:21.524: INFO: Pod pod-projected-configmaps-cd7a2898-1de4-4a09-b356-c75e8e875212 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:07:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6446" for this suite.
Aug 11 08:07:27.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:07:27.616: INFO: namespace projected-6446 deletion completed in 6.088250288s

• [SLOW TEST:10.219 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
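
A minimal sketch of what the projected-configMap spec creates: a ConfigMap consumed through a projected volume, with the pod expected to exit successfully after reading the key. All names, the key/value pair, and the busybox image are illustrative assumptions, not values from this run:

# Create a ConfigMap and a pod that mounts it via a projected volume.
kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]   # prints value-1, then exits 0
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
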
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:07:27.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:07:27.717: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 11 08:07:27.728: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:27.733: INFO: Number of nodes with available pods: 0
Aug 11 08:07:27.733: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 08:07:28.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:28.742: INFO: Number of nodes with available pods: 0
Aug 11 08:07:28.742: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 08:07:30.089: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:30.091: INFO: Number of nodes with available pods: 0
Aug 11 08:07:30.091: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 08:07:30.757: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:30.760: INFO: Number of nodes with available pods: 0
Aug 11 08:07:30.760: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 08:07:31.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:31.741: INFO: Number of nodes with available pods: 0
Aug 11 08:07:31.741: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 08:07:32.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:32.742: INFO: Number of nodes with available pods: 2
Aug 11 08:07:32.742: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 11 08:07:32.775: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:32.775: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:32.794: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:33.797: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:33.797: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:33.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:34.799: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:34.799: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:34.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:35.805: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:35.805: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:35.805: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:35.809: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:36.798: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:36.798: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:36.798: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:36.801: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:37.812: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:37.812: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:37.812: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:37.816: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:38.799: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:38.799: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:38.799: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:38.804: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:39.799: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:39.799: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:39.799: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:39.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:40.800: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:40.800: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:40.800: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:40.804: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:41.798: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:41.798: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:41.798: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:41.802: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:42.799: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:42.799: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:42.799: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:42.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:43.799: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:43.799: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:43.799: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:43.811: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:44.798: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:44.798: INFO: Wrong image for pod: daemon-set-zvbpn. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:44.798: INFO: Pod daemon-set-zvbpn is not available
Aug 11 08:07:44.802: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:45.798: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:45.798: INFO: Pod daemon-set-h4wz2 is not available
Aug 11 08:07:45.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:46.799: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:46.799: INFO: Pod daemon-set-h4wz2 is not available
Aug 11 08:07:46.803: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:47.812: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:47.812: INFO: Pod daemon-set-h4wz2 is not available
Aug 11 08:07:47.816: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:48.817: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:48.831: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:49.798: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:49.802: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:50.798: INFO: Wrong image for pod: daemon-set-45wqp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 11 08:07:50.798: INFO: Pod daemon-set-45wqp is not available
Aug 11 08:07:50.802: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:51.799: INFO: Pod daemon-set-hxvpt is not available
Aug 11 08:07:51.804: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 11 08:07:51.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:51.817: INFO: Number of nodes with available pods: 1
Aug 11 08:07:51.817: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 08:07:52.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:52.824: INFO: Number of nodes with available pods: 1
Aug 11 08:07:52.824: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 08:07:53.837: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:53.839: INFO: Number of nodes with available pods: 1
Aug 11 08:07:53.839: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 08:07:54.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:54.825: INFO: Number of nodes with available pods: 1
Aug 11 08:07:54.825: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 08:07:55.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 08:07:55.847: INFO: Number of nodes with available pods: 2
Aug 11 08:07:55.847: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5698, will wait for the garbage collector to delete the pods
Aug 11 08:07:55.919: INFO: Deleting DaemonSet.extensions daemon-set took: 5.912389ms
Aug 11 08:07:56.220: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.35225ms
Aug 11 08:08:05.123: INFO: Number of nodes with available pods: 0
Aug 11 08:08:05.123: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 08:08:05.126: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5698/daemonsets","resourceVersion":"4149462"},"items":null}

Aug 11 08:08:05.135: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5698/pods","resourceVersion":"4149463"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:08:05.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5698" for this suite.
Aug 11 08:08:11.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:08:11.274: INFO: namespace daemonsets-5698 deletion completed in 6.127902639s

• [SLOW TEST:43.657 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
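
The long poll above is the test waiting for the RollingUpdate strategy to replace the nginx pods with redis pods, node by node, while the control-plane node is skipped because of its NoSchedule taint. A hedged sketch of the same flow (object and container names are invented; the two images match the ones logged above):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # pods are replaced in place on a spec change
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Changing the image triggers the rolling update the test checks for.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set
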
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:08:11.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:08:31.358: INFO: Container started at 2020-08-11 08:08:13 +0000 UTC, pod became ready at 2020-08-11 08:08:30 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:08:31.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5870" for this suite.
Aug 11 08:08:53.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:08:53.481: INFO: namespace container-probe-5870 deletion completed in 22.119884889s

• [SLOW TEST:42.207 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
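
This spec asserts two things visible in the log line above: the pod must not report Ready before the probe's initial delay (started 08:08:13, ready 08:08:30), and the container must never restart. A minimal pod of the same shape; the image, file path, and timings are assumptions, not taken from the test source:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15   # pod cannot become Ready before this delay
      periodSeconds: 5
EOF
# READY stays 0/1 for ~15s even though the probe file exists immediately.
kubectl get pod readiness-demo -w
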
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:08:53.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 11 08:08:53.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8651'
Aug 11 08:08:56.484: INFO: stderr: ""
Aug 11 08:08:56.484: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 08:08:56.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8651'
Aug 11 08:08:56.634: INFO: stderr: ""
Aug 11 08:08:56.634: INFO: stdout: "update-demo-nautilus-6j62g update-demo-nautilus-j2c5x "
Aug 11 08:08:56.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j62g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:08:56.726: INFO: stderr: ""
Aug 11 08:08:56.726: INFO: stdout: ""
Aug 11 08:08:56.727: INFO: update-demo-nautilus-6j62g is created but not running
Aug 11 08:09:01.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8651'
Aug 11 08:09:01.827: INFO: stderr: ""
Aug 11 08:09:01.827: INFO: stdout: "update-demo-nautilus-6j62g update-demo-nautilus-j2c5x "
Aug 11 08:09:01.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j62g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:01.914: INFO: stderr: ""
Aug 11 08:09:01.914: INFO: stdout: "true"
Aug 11 08:09:01.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6j62g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:02.003: INFO: stderr: ""
Aug 11 08:09:02.003: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:09:02.003: INFO: validating pod update-demo-nautilus-6j62g
Aug 11 08:09:02.006: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:09:02.006: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:09:02.006: INFO: update-demo-nautilus-6j62g is verified up and running
Aug 11 08:09:02.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2c5x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:02.100: INFO: stderr: ""
Aug 11 08:09:02.100: INFO: stdout: "true"
Aug 11 08:09:02.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j2c5x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:02.192: INFO: stderr: ""
Aug 11 08:09:02.192: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:09:02.192: INFO: validating pod update-demo-nautilus-j2c5x
Aug 11 08:09:02.195: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:09:02.196: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:09:02.196: INFO: update-demo-nautilus-j2c5x is verified up and running
STEP: rolling-update to new replication controller
Aug 11 08:09:02.197: INFO: scanned /root for discovery docs: 
Aug 11 08:09:02.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8651'
Aug 11 08:09:24.743: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 11 08:09:24.743: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 08:09:24.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8651'
Aug 11 08:09:24.846: INFO: stderr: ""
Aug 11 08:09:24.846: INFO: stdout: "update-demo-kitten-j5bw9 update-demo-kitten-zwdnc update-demo-nautilus-j2c5x "
STEP: Replicas for name=update-demo: expected=2 actual=3
Aug 11 08:09:29.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8651'
Aug 11 08:09:29.945: INFO: stderr: ""
Aug 11 08:09:29.945: INFO: stdout: "update-demo-kitten-j5bw9 update-demo-kitten-zwdnc "
Aug 11 08:09:29.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j5bw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:30.039: INFO: stderr: ""
Aug 11 08:09:30.039: INFO: stdout: "true"
Aug 11 08:09:30.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j5bw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:30.131: INFO: stderr: ""
Aug 11 08:09:30.131: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 11 08:09:30.131: INFO: validating pod update-demo-kitten-j5bw9
Aug 11 08:09:30.135: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 11 08:09:30.135: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 11 08:09:30.135: INFO: update-demo-kitten-j5bw9 is verified up and running
Aug 11 08:09:30.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zwdnc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:30.231: INFO: stderr: ""
Aug 11 08:09:30.231: INFO: stdout: "true"
Aug 11 08:09:30.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zwdnc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8651'
Aug 11 08:09:30.317: INFO: stderr: ""
Aug 11 08:09:30.317: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 11 08:09:30.317: INFO: validating pod update-demo-kitten-zwdnc
Aug 11 08:09:30.321: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 11 08:09:30.321: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug 11 08:09:30.321: INFO: update-demo-kitten-zwdnc is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:09:30.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8651" for this suite.
Aug 11 08:09:52.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:09:52.410: INFO: namespace kubectl-8651 deletion completed in 22.085356137s

• [SLOW TEST:58.928 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
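
The nautilus-to-kitten churn above is kubectl rolling-update replacing one replication controller's pods with another's, one pod at a time, then renaming the new controller back to the old name. With the v1.15 client used in this run (the command is deprecated there, as the stderr above notes, and removed in later releases) the equivalent by hand is roughly the following; the RC name and image are taken from the log, but using --image instead of a piped manifest is an assumption:

# Replace the pods of RC update-demo-nautilus with a new image,
# waiting 1s between pod replacements as the test does.
kubectl rolling-update update-demo-nautilus --update-period=1s \
  --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0
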
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:09:52.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 08:09:52.475: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc" in namespace "downward-api-8784" to be "success or failure"
Aug 11 08:09:52.479: INFO: Pod "downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.257095ms
Aug 11 08:09:54.483: INFO: Pod "downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007776841s
Aug 11 08:09:56.487: INFO: Pod "downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011189855s
STEP: Saw pod success
Aug 11 08:09:56.487: INFO: Pod "downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc" satisfied condition "success or failure"
Aug 11 08:09:56.490: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc container client-container: 
STEP: delete the pod
Aug 11 08:09:56.696: INFO: Waiting for pod downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc to disappear
Aug 11 08:09:56.779: INFO: Pod downwardapi-volume-5a03bc17-2715-46f6-ab00-80745cc7fdfc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:09:56.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8784" for this suite.
Aug 11 08:10:02.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:10:02.948: INFO: namespace downward-api-8784 deletion completed in 6.164206044s

• [SLOW TEST:10.538 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
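
The downward API volume in this spec exposes the container's own memory limit as a file inside the container. A minimal sketch with invented names and an invented limit; resourceFieldRef with limits.memory is the standard field for this:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]   # prints the limit in bytes
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
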
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:10:02.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:10:03.060: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 11 08:10:08.065: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 11 08:10:08.065: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 11 08:10:08.109: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2059,SelfLink:/apis/apps/v1/namespaces/deployment-2059/deployments/test-cleanup-deployment,UID:13281d42-733e-4987-b99b-6b92a40fd4f2,ResourceVersion:4149929,Generation:1,CreationTimestamp:2020-08-11 08:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Aug 11 08:10:08.121: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2059,SelfLink:/apis/apps/v1/namespaces/deployment-2059/replicasets/test-cleanup-deployment-55bbcbc84c,UID:8697592e-a095-497b-88cd-32783fbda0ba,ResourceVersion:4149931,Generation:1,CreationTimestamp:2020-08-11 08:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 13281d42-733e-4987-b99b-6b92a40fd4f2 0xc00272faa7 0xc00272faa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 08:10:08.121: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 11 08:10:08.121: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2059,SelfLink:/apis/apps/v1/namespaces/deployment-2059/replicasets/test-cleanup-controller,UID:9c3501ac-8224-4388-b81f-b7bdea9333b1,ResourceVersion:4149930,Generation:1,CreationTimestamp:2020-08-11 08:10:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 13281d42-733e-4987-b99b-6b92a40fd4f2 0xc00272f9d7 0xc00272f9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 11 08:10:08.151: INFO: Pod "test-cleanup-controller-zw7r9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-zw7r9,GenerateName:test-cleanup-controller-,Namespace:deployment-2059,SelfLink:/api/v1/namespaces/deployment-2059/pods/test-cleanup-controller-zw7r9,UID:6cfd71a9-2e14-4c90-9e66-862806592c56,ResourceVersion:4149924,Generation:0,CreationTimestamp:2020-08-11 08:10:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 9c3501ac-8224-4388-b81f-b7bdea9333b1 0xc002efc357 0xc002efc358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z65mb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z65mb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-z65mb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002efc3d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002efc3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:10:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:10:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:10:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:10:03 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.84,StartTime:2020-08-11 08:10:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 08:10:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://430bcde8804526d84a0110891de94d3ad47b0eb7a89c09c9a0b37ae4ccfdd613}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 08:10:08.151: INFO: Pod "test-cleanup-deployment-55bbcbc84c-x4gtn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-x4gtn,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2059,SelfLink:/api/v1/namespaces/deployment-2059/pods/test-cleanup-deployment-55bbcbc84c-x4gtn,UID:699f21ad-bea2-476b-9b61-a511905ddcac,ResourceVersion:4149935,Generation:0,CreationTimestamp:2020-08-11 08:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 8697592e-a095-497b-88cd-32783fbda0ba 0xc002efc4d7 0xc002efc4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z65mb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z65mb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-z65mb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002efc550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002efc570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:10:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:10:08.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2059" for this suite.
Aug 11 08:10:14.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:10:14.327: INFO: namespace deployment-2059 deletion completed in 6.123568292s

• [SLOW TEST:11.378 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
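
The Deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller garbage-collect old ReplicaSets once a rollout completes; that is the behavior this spec verifies. A sketch of the same setup with invented names:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets around
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl set image deployment/cleanup-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
# After the rollout, only the new ReplicaSet should be listed.
kubectl get rs -l app=cleanup-demo
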
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:10:14.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-bf09b8d9-ed96-4546-83d4-2f19c9566ce8
STEP: Creating a pod to test consume secrets
Aug 11 08:10:14.416: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446" in namespace "projected-5326" to be "success or failure"
Aug 11 08:10:14.419: INFO: Pod "pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446": Phase="Pending", Reason="", readiness=false. Elapsed: 3.318272ms
Aug 11 08:10:16.423: INFO: Pod "pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00731237s
Aug 11 08:10:18.427: INFO: Pod "pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011209229s
STEP: Saw pod success
Aug 11 08:10:18.427: INFO: Pod "pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446" satisfied condition "success or failure"
Aug 11 08:10:18.430: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446 container secret-volume-test: 
STEP: delete the pod
Aug 11 08:10:18.540: INFO: Waiting for pod pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446 to disappear
Aug 11 08:10:18.551: INFO: Pod pod-projected-secrets-b656fcec-8df7-484d-ad3b-d3ae0781b446 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:10:18.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5326" for this suite.
Aug 11 08:10:24.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:10:24.655: INFO: namespace projected-5326 deletion completed in 6.100026203s

• [SLOW TEST:10.328 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
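
This spec mounts the same Secret at two paths through two projected volumes in a single pod. A hedged equivalent; all names, the key/value pair, and the busybox image are assumptions:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-1
      mountPath: /etc/secret-1
    - name: secret-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-1
    projected:
      sources:
      - secret:
          name: demo-secret
  - name: secret-2
    projected:
      sources:
      - secret:
          name: demo-secret
EOF
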
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:10:24.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-192e2152-0c3b-4773-9c33-add3d3730f39 in namespace container-probe-9996
Aug 11 08:10:28.784: INFO: Started pod busybox-192e2152-0c3b-4773-9c33-add3d3730f39 in namespace container-probe-9996
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 08:10:28.787: INFO: Initial restart count of pod busybox-192e2152-0c3b-4773-9c33-add3d3730f39 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:14:29.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9996" for this suite.
Aug 11 08:14:35.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:14:35.431: INFO: namespace container-probe-9996 deletion completed in 6.11224573s

• [SLOW TEST:250.776 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
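
The four-minute gap between 08:10:28 and 08:14:29 above is the test simply observing restartCount: a liveness probe that always succeeds must never trigger a restart. A minimal pod of the same shape; the image, file path, and timings are assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # always succeeds
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount should remain 0 for the life of the pod.
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
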
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:14:35.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 11 08:14:40.040: INFO: Successfully updated pod "pod-update-ae1c0f54-ad25-46a4-a6f0-a5da7b5ded74"
STEP: verifying the updated pod is in kubernetes
Aug 11 08:14:40.068: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:14:40.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6066" for this suite.
Aug 11 08:15:02.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:15:02.186: INFO: namespace pods-6066 deletion completed in 22.113715771s

• [SLOW TEST:26.755 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
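
The "updating the pod" step above mutates the live pod object and then reads it back. One update of that kind can be done by hand by patching a label; the pod name and label below are invented:

# Mutate a running pod's labels and read them back.
kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-demo --show-labels
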
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:15:02.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 08:15:06.373: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:15:06.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3582" for this suite.
Aug 11 08:15:12.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:15:12.510: INFO: namespace container-runtime-3582 deletion completed in 6.086482033s

• [SLOW TEST:10.323 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
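
A sketch of the case exercised here: the container runs as a non-root UID, writes its message to a non-default path, and exits zero. The expected string DONE comes from the log above; the pod name, image, UID and path are illustrative.

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-nonroot   # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                   # any non-root UID
    containers:
    - name: term
      image: busybox
      command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom"]
      terminationMessagePath: /dev/termination-custom   # non-default path
  EOF

  # The message surfaces in the terminated container state:
  kubectl get pod termination-message-nonroot \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
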
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:15:12.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 08:15:16.730: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:15:16.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3614" for this suite.
Aug 11 08:15:22.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:15:22.934: INFO: namespace container-runtime-3614 deletion completed in 6.14417181s

• [SLOW TEST:10.423 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
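
This variant never writes a termination-message file; with TerminationMessagePolicy FallbackToLogsOnError the kubelet instead uses the tail of the container log when the container exits non-zero, which is why the failed container above still reports DONE. A minimal sketch (pod name and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-from-logs   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: term
      image: busybox
      # Write DONE to stdout only, then fail; the message file stays empty,
      # so the kubelet falls back to the log tail.
      command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
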
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:15:22.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:15:23.024: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 11 08:15:23.031: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 11 08:15:28.035: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 11 08:15:28.035: INFO: Creating deployment "test-rolling-update-deployment"
Aug 11 08:15:28.039: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 11 08:15:28.080: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 11 08:15:30.089: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 11 08:15:30.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732730528, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732730528, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732730528, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732730528, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 08:15:32.095: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 11 08:15:32.105: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1646,SelfLink:/apis/apps/v1/namespaces/deployment-1646/deployments/test-rolling-update-deployment,UID:b346e69e-b7d7-4b1f-94a1-30c143cc9381,ResourceVersion:4150768,Generation:1,CreationTimestamp:2020-08-11 08:15:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-11 08:15:28 +0000 UTC 2020-08-11 08:15:28 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-11 08:15:31 +0000 UTC 2020-08-11 08:15:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 11 08:15:32.108: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1646,SelfLink:/apis/apps/v1/namespaces/deployment-1646/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:9425cc9b-5069-40ea-a69e-1a3f7643842a,ResourceVersion:4150757,Generation:1,CreationTimestamp:2020-08-11 08:15:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b346e69e-b7d7-4b1f-94a1-30c143cc9381 0xc002faf967 0xc002faf968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 11 08:15:32.108: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 11 08:15:32.108: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1646,SelfLink:/apis/apps/v1/namespaces/deployment-1646/replicasets/test-rolling-update-controller,UID:68f44301-48d3-4607-8035-748dcc11bebe,ResourceVersion:4150766,Generation:2,CreationTimestamp:2020-08-11 08:15:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b346e69e-b7d7-4b1f-94a1-30c143cc9381 0xc002faf897 0xc002faf898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 08:15:32.111: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-9prnd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-9prnd,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1646,SelfLink:/api/v1/namespaces/deployment-1646/pods/test-rolling-update-deployment-79f6b9d75c-9prnd,UID:0005ab66-2fe9-4eee-bcd1-0b4b26b4b2ec,ResourceVersion:4150756,Generation:0,CreationTimestamp:2020-08-11 08:15:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 9425cc9b-5069-40ea-a69e-1a3f7643842a 0xc001bc9a27 0xc001bc9a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6bgpl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6bgpl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6bgpl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001bc9aa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001bc9ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:15:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:15:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:15:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:15:28 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.89,StartTime:2020-08-11 08:15:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-11 08:15:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://35dc6cf2b99b9731de3c8b6f7acff673e6e78d1c519ea2feee904f1cae15dd49}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:15:32.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1646" for this suite.
Aug 11 08:15:38.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:15:38.202: INFO: namespace deployment-1646 deletion completed in 6.087642981s

• [SLOW TEST:15.268 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
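
The strategy block in the deployment dump above (RollingUpdate with maxUnavailable 25% and maxSurge 25%, the apps/v1 defaults) is what makes the controller replace the adopted ReplicaSet's pod with one from a new ReplicaSet. Outside the test harness the same rollout can be driven with kubectl; the deployment name and namespace are from the log, the new image tag is hypothetical:

  # Any pod-template change triggers a new ReplicaSet and a rolling update.
  kubectl set image deployment/test-rolling-update-deployment \
    redis=gcr.io/kubernetes-e2e-test-images/redis:1.1 -n deployment-1646   # hypothetical tag
  kubectl rollout status deployment/test-rolling-update-deployment -n deployment-1646
  # Afterwards the old ReplicaSet sits at 0 replicas, the new one at 1:
  kubectl get rs -n deployment-1646
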
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:15:38.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 08:15:38.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb" in namespace "projected-7730" to be "success or failure"
Aug 11 08:15:38.445: INFO: Pod "downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.592318ms
Aug 11 08:15:40.449: INFO: Pod "downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023639197s
Aug 11 08:15:42.453: INFO: Pod "downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027454797s
STEP: Saw pod success
Aug 11 08:15:42.453: INFO: Pod "downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb" satisfied condition "success or failure"
Aug 11 08:15:42.456: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb container client-container: 
STEP: delete the pod
Aug 11 08:15:42.497: INFO: Waiting for pod downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb to disappear
Aug 11 08:15:42.510: INFO: Pod downwardapi-volume-acdfb17e-fd81-4c70-b3cd-33be0bf0e1fb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:15:42.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7730" for this suite.
Aug 11 08:15:48.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:15:48.641: INFO: namespace projected-7730 deletion completed in 6.126305663s

• [SLOW TEST:10.438 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
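
The "downward API volume plugin" here is the projected flavor: the container's own memory request is rendered into a file the container can read. A sketch, assuming illustrative names, mount path and request value (resourceFieldRef and divisor are the API surface under test):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi               # illustrative request
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_request
              resourceFieldRef:
                containerName: client-container
                resource: requests.memory
                divisor: 1Mi         # file contents: "32"
  EOF
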
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:15:48.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 11 08:15:48.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8235'
Aug 11 08:15:48.826: INFO: stderr: ""
Aug 11 08:15:48.826: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 11 08:15:48.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8235'
Aug 11 08:15:53.994: INFO: stderr: ""
Aug 11 08:15:53.994: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:15:53.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8235" for this suite.
Aug 11 08:16:00.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:16:00.614: INFO: namespace kubectl-8235 deletion completed in 6.147465244s

• [SLOW TEST:11.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:16:00.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:16:30.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5224" for this suite.
Aug 11 08:16:36.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:16:36.786: INFO: namespace container-runtime-5224 deletion completed in 6.111460465s

• [SLOW TEST:36.172 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
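
The three container names encode the restart policies under test: 'rpa', 'rpof' and 'rpn' line up with restartPolicy Always, OnFailure and Never (an inference from the names and the standard suite, not stated in this log). The observable differences are the pod phase, the Ready condition and RestartCount; a sketch for one of the three (name, image and exit code illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: terminate-cmd-demo     # hypothetical name
  spec:
    restartPolicy: OnFailure     # compare behavior with Always and Never
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "exit 1"]
  EOF
  # With OnFailure the kubelet keeps restarting the failing container and
  # RestartCount climbs; with Never the pod goes straight to phase Failed.
  kubectl get pod terminate-cmd-demo \
    -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
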
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:16:36.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 11 08:16:36.874: INFO: Waiting up to 5m0s for pod "pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d" in namespace "emptydir-6097" to be "success or failure"
Aug 11 08:16:36.886: INFO: Pod "pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.163677ms
Aug 11 08:16:38.981: INFO: Pod "pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106877327s
Aug 11 08:16:40.985: INFO: Pod "pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110951475s
STEP: Saw pod success
Aug 11 08:16:40.985: INFO: Pod "pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d" satisfied condition "success or failure"
Aug 11 08:16:40.988: INFO: Trying to get logs from node iruya-worker pod pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d container test-container: 
STEP: delete the pod
Aug 11 08:16:41.013: INFO: Waiting for pod pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d to disappear
Aug 11 08:16:41.018: INFO: Pod pod-feb30b1b-5ee4-43de-b5d6-d3bc9170485d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:16:41.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6097" for this suite.
Aug 11 08:16:47.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:16:47.111: INFO: namespace emptydir-6097 deletion completed in 6.089662815s

• [SLOW TEST:10.324 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
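
The "(non-root,0666,default)" triple in the spec name is who writes, the file mode checked, and the emptyDir medium. The suite uses its own mount-test image for the check; a rough stand-in with a shell one-liner (names and UID illustrative; fsGroup is added here so the non-root user can write to the volume):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo     # hypothetical name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000            # the "non-root" part of the spec name
      fsGroup: 1000              # make the volume writable for that user
    containers:
    - name: test-container
      image: busybox
      command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}               # default medium, i.e. node-local disk
  EOF
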
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:16:47.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9775
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 11 08:16:47.243: INFO: Found 0 stateful pods, waiting for 3
Aug 11 08:16:57.248: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:16:57.248: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:16:57.248: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 08:17:07.248: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:17:07.248: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:17:07.248: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:17:07.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9775 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 08:17:07.537: INFO: stderr: "I0811 08:17:07.398352     592 log.go:172] (0xc0001166e0) (0xc000278820) Create stream\nI0811 08:17:07.398405     592 log.go:172] (0xc0001166e0) (0xc000278820) Stream added, broadcasting: 1\nI0811 08:17:07.400919     592 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0811 08:17:07.400968     592 log.go:172] (0xc0001166e0) (0xc000108280) Create stream\nI0811 08:17:07.400981     592 log.go:172] (0xc0001166e0) (0xc000108280) Stream added, broadcasting: 3\nI0811 08:17:07.402036     592 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0811 08:17:07.402084     592 log.go:172] (0xc0001166e0) (0xc0006fe000) Create stream\nI0811 08:17:07.402105     592 log.go:172] (0xc0001166e0) (0xc0006fe000) Stream added, broadcasting: 5\nI0811 08:17:07.402963     592 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0811 08:17:07.475811     592 log.go:172] (0xc0001166e0) Data frame received for 5\nI0811 08:17:07.475832     592 log.go:172] (0xc0006fe000) (5) Data frame handling\nI0811 08:17:07.475844     592 log.go:172] (0xc0006fe000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 08:17:07.528334     592 log.go:172] (0xc0001166e0) Data frame received for 3\nI0811 08:17:07.528359     592 log.go:172] (0xc000108280) (3) Data frame handling\nI0811 08:17:07.528374     592 log.go:172] (0xc000108280) (3) Data frame sent\nI0811 08:17:07.529012     592 log.go:172] (0xc0001166e0) Data frame received for 3\nI0811 08:17:07.529031     592 log.go:172] (0xc000108280) (3) Data frame handling\nI0811 08:17:07.529100     592 log.go:172] (0xc0001166e0) Data frame received for 5\nI0811 08:17:07.529111     592 log.go:172] (0xc0006fe000) (5) Data frame handling\nI0811 08:17:07.531073     592 log.go:172] (0xc0001166e0) Data frame received for 1\nI0811 08:17:07.531103     592 log.go:172] (0xc000278820) (1) Data frame handling\nI0811 08:17:07.531117     592 log.go:172] (0xc000278820) (1) Data frame sent\nI0811 08:17:07.531132     592 log.go:172] (0xc0001166e0) (0xc000278820) Stream removed, broadcasting: 1\nI0811 08:17:07.531646     592 log.go:172] (0xc0001166e0) Go away received\nI0811 08:17:07.531817     592 log.go:172] (0xc0001166e0) (0xc000278820) Stream removed, broadcasting: 1\nI0811 08:17:07.531836     592 log.go:172] (0xc0001166e0) (0xc000108280) Stream removed, broadcasting: 3\nI0811 08:17:07.531846     592 log.go:172] (0xc0001166e0) (0xc0006fe000) Stream removed, broadcasting: 5\n"
Aug 11 08:17:07.537: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 08:17:07.537: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 11 08:17:17.584: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 11 08:17:27.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9775 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 08:17:27.829: INFO: stderr: "I0811 08:17:27.749730     612 log.go:172] (0xc0007ca630) (0xc0006ba820) Create stream\nI0811 08:17:27.749792     612 log.go:172] (0xc0007ca630) (0xc0006ba820) Stream added, broadcasting: 1\nI0811 08:17:27.755091     612 log.go:172] (0xc0007ca630) Reply frame received for 1\nI0811 08:17:27.755168     612 log.go:172] (0xc0007ca630) (0xc0008be000) Create stream\nI0811 08:17:27.755191     612 log.go:172] (0xc0007ca630) (0xc0008be000) Stream added, broadcasting: 3\nI0811 08:17:27.756436     612 log.go:172] (0xc0007ca630) Reply frame received for 3\nI0811 08:17:27.756494     612 log.go:172] (0xc0007ca630) (0xc0009b2000) Create stream\nI0811 08:17:27.756520     612 log.go:172] (0xc0007ca630) (0xc0009b2000) Stream added, broadcasting: 5\nI0811 08:17:27.757651     612 log.go:172] (0xc0007ca630) Reply frame received for 5\nI0811 08:17:27.821700     612 log.go:172] (0xc0007ca630) Data frame received for 5\nI0811 08:17:27.821760     612 log.go:172] (0xc0009b2000) (5) Data frame handling\nI0811 08:17:27.821784     612 log.go:172] (0xc0009b2000) (5) Data frame sent\nI0811 08:17:27.821796     612 log.go:172] (0xc0007ca630) Data frame received for 5\nI0811 08:17:27.821804     612 log.go:172] (0xc0009b2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 08:17:27.821846     612 log.go:172] (0xc0007ca630) Data frame received for 3\nI0811 08:17:27.821871     612 log.go:172] (0xc0008be000) (3) Data frame handling\nI0811 08:17:27.821885     612 log.go:172] (0xc0008be000) (3) Data frame sent\nI0811 08:17:27.821895     612 log.go:172] (0xc0007ca630) Data frame received for 3\nI0811 08:17:27.821907     612 log.go:172] (0xc0008be000) (3) Data frame handling\nI0811 08:17:27.823168     612 log.go:172] (0xc0007ca630) Data frame received for 1\nI0811 08:17:27.823190     612 log.go:172] (0xc0006ba820) (1) Data frame handling\nI0811 08:17:27.823200     612 log.go:172] (0xc0006ba820) (1) Data frame sent\nI0811 08:17:27.823210     612 log.go:172] (0xc0007ca630) (0xc0006ba820) Stream removed, broadcasting: 1\nI0811 08:17:27.823222     612 log.go:172] (0xc0007ca630) Go away received\nI0811 08:17:27.824169     612 log.go:172] (0xc0007ca630) (0xc0006ba820) Stream removed, broadcasting: 1\nI0811 08:17:27.824207     612 log.go:172] (0xc0007ca630) (0xc0008be000) Stream removed, broadcasting: 3\nI0811 08:17:27.824230     612 log.go:172] (0xc0007ca630) (0xc0009b2000) Stream removed, broadcasting: 5\n"
Aug 11 08:17:27.829: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 08:17:27.829: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 08:17:37.855: INFO: Waiting for StatefulSet statefulset-9775/ss2 to complete update
Aug 11 08:17:37.855: INFO: Waiting for Pod statefulset-9775/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 11 08:17:37.855: INFO: Waiting for Pod statefulset-9775/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 11 08:17:47.863: INFO: Waiting for StatefulSet statefulset-9775/ss2 to complete update
Aug 11 08:17:47.863: INFO: Waiting for Pod statefulset-9775/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug 11 08:17:57.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9775 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 08:17:58.105: INFO: stderr: "I0811 08:17:58.000836     637 log.go:172] (0xc00013adc0) (0xc0003bc820) Create stream\nI0811 08:17:58.000915     637 log.go:172] (0xc00013adc0) (0xc0003bc820) Stream added, broadcasting: 1\nI0811 08:17:58.006545     637 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0811 08:17:58.006599     637 log.go:172] (0xc00013adc0) (0xc0005d6280) Create stream\nI0811 08:17:58.006619     637 log.go:172] (0xc00013adc0) (0xc0005d6280) Stream added, broadcasting: 3\nI0811 08:17:58.007759     637 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0811 08:17:58.007809     637 log.go:172] (0xc00013adc0) (0xc0005d6320) Create stream\nI0811 08:17:58.007824     637 log.go:172] (0xc00013adc0) (0xc0005d6320) Stream added, broadcasting: 5\nI0811 08:17:58.009141     637 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0811 08:17:58.073177     637 log.go:172] (0xc00013adc0) Data frame received for 5\nI0811 08:17:58.073221     637 log.go:172] (0xc0005d6320) (5) Data frame handling\nI0811 08:17:58.073243     637 log.go:172] (0xc0005d6320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 08:17:58.099142     637 log.go:172] (0xc00013adc0) Data frame received for 3\nI0811 08:17:58.099183     637 log.go:172] (0xc0005d6280) (3) Data frame handling\nI0811 08:17:58.099203     637 log.go:172] (0xc0005d6280) (3) Data frame sent\nI0811 08:17:58.099215     637 log.go:172] (0xc00013adc0) Data frame received for 3\nI0811 08:17:58.099222     637 log.go:172] (0xc0005d6280) (3) Data frame handling\nI0811 08:17:58.099318     637 log.go:172] (0xc00013adc0) Data frame received for 5\nI0811 08:17:58.099335     637 log.go:172] (0xc0005d6320) (5) Data frame handling\nI0811 08:17:58.101559     637 log.go:172] (0xc00013adc0) Data frame received for 1\nI0811 08:17:58.101581     637 log.go:172] (0xc0003bc820) (1) Data frame handling\nI0811 08:17:58.101591     637 log.go:172] (0xc0003bc820) (1) Data frame sent\nI0811 08:17:58.101603     637 log.go:172] (0xc00013adc0) (0xc0003bc820) Stream removed, broadcasting: 1\nI0811 08:17:58.101624     637 log.go:172] (0xc00013adc0) Go away received\nI0811 08:17:58.101930     637 log.go:172] (0xc00013adc0) (0xc0003bc820) Stream removed, broadcasting: 1\nI0811 08:17:58.101946     637 log.go:172] (0xc00013adc0) (0xc0005d6280) Stream removed, broadcasting: 3\nI0811 08:17:58.101952     637 log.go:172] (0xc00013adc0) (0xc0005d6320) Stream removed, broadcasting: 5\n"
Aug 11 08:17:58.106: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 08:17:58.106: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 08:18:08.162: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 11 08:18:18.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9775 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 08:18:18.415: INFO: stderr: "I0811 08:18:18.334669     659 log.go:172] (0xc000aa6420) (0xc000934640) Create stream\nI0811 08:18:18.334755     659 log.go:172] (0xc000aa6420) (0xc000934640) Stream added, broadcasting: 1\nI0811 08:18:18.337691     659 log.go:172] (0xc000aa6420) Reply frame received for 1\nI0811 08:18:18.337745     659 log.go:172] (0xc000aa6420) (0xc0009a8000) Create stream\nI0811 08:18:18.337762     659 log.go:172] (0xc000aa6420) (0xc0009a8000) Stream added, broadcasting: 3\nI0811 08:18:18.338888     659 log.go:172] (0xc000aa6420) Reply frame received for 3\nI0811 08:18:18.338935     659 log.go:172] (0xc000aa6420) (0xc0009a80a0) Create stream\nI0811 08:18:18.338948     659 log.go:172] (0xc000aa6420) (0xc0009a80a0) Stream added, broadcasting: 5\nI0811 08:18:18.340162     659 log.go:172] (0xc000aa6420) Reply frame received for 5\nI0811 08:18:18.408903     659 log.go:172] (0xc000aa6420) Data frame received for 5\nI0811 08:18:18.408938     659 log.go:172] (0xc0009a80a0) (5) Data frame handling\nI0811 08:18:18.408950     659 log.go:172] (0xc0009a80a0) (5) Data frame sent\nI0811 08:18:18.408958     659 log.go:172] (0xc000aa6420) Data frame received for 5\nI0811 08:18:18.408965     659 log.go:172] (0xc0009a80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 08:18:18.409023     659 log.go:172] (0xc000aa6420) Data frame received for 3\nI0811 08:18:18.409069     659 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0811 08:18:18.409089     659 log.go:172] (0xc0009a8000) (3) Data frame sent\nI0811 08:18:18.409099     659 log.go:172] (0xc000aa6420) Data frame received for 3\nI0811 08:18:18.409109     659 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0811 08:18:18.410546     659 log.go:172] (0xc000aa6420) Data frame received for 1\nI0811 08:18:18.410580     659 log.go:172] (0xc000934640) (1) Data frame handling\nI0811 08:18:18.410606     659 log.go:172] (0xc000934640) (1) Data frame sent\nI0811 08:18:18.410657     659 log.go:172] (0xc000aa6420) (0xc000934640) Stream removed, broadcasting: 1\nI0811 08:18:18.410699     659 log.go:172] (0xc000aa6420) Go away received\nI0811 08:18:18.411142     659 log.go:172] (0xc000aa6420) (0xc000934640) Stream removed, broadcasting: 1\nI0811 08:18:18.411185     659 log.go:172] (0xc000aa6420) (0xc0009a8000) Stream removed, broadcasting: 3\nI0811 08:18:18.411202     659 log.go:172] (0xc000aa6420) (0xc0009a80a0) Stream removed, broadcasting: 5\n"
Aug 11 08:18:18.415: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 08:18:18.415: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 08:18:28.437: INFO: Waiting for StatefulSet statefulset-9775/ss2 to complete update
Aug 11 08:18:28.437: INFO: Waiting for Pod statefulset-9775/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 11 08:18:28.437: INFO: Waiting for Pod statefulset-9775/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 11 08:18:38.443: INFO: Waiting for StatefulSet statefulset-9775/ss2 to complete update
Aug 11 08:18:38.443: INFO: Waiting for Pod statefulset-9775/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 11 08:18:48.458: INFO: Waiting for StatefulSet statefulset-9775/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 11 08:18:58.445: INFO: Deleting all statefulset in ns statefulset-9775
Aug 11 08:18:58.448: INFO: Scaling statefulset ss2 to 0
Aug 11 08:19:28.494: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 08:19:28.497: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:19:28.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9775" for this suite.
Aug 11 08:19:34.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:19:34.796: INFO: namespace statefulset-9775 deletion completed in 6.284151891s

• [SLOW TEST:167.685 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
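
The mv commands above are the suite toggling the pods' readiness: moving index.html away breaks nginx's HTTP probe so the controller can proceed pod by pod in reverse ordinal order, and moving it back restores readiness (an inference from the standard e2e helpers, not stated in this log). Driven by hand, the forward update and the rollback look roughly like this; the statefulset name and namespace are from the log, the container name nginx is assumed, and rollout undo is one way to express the rollback rather than what the test does internally:

  # Roll the template forward to nginx:1.15-alpine ...
  kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine \
    -n statefulset-9775
  kubectl rollout status statefulset/ss2 -n statefulset-9775
  # ... then return to the previous revision (nginx:1.14-alpine):
  kubectl rollout undo statefulset/ss2 -n statefulset-9775
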
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:19:34.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 08:19:39.073: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:19:39.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7952" for this suite.
Aug 11 08:19:45.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:19:45.268: INFO: namespace container-runtime-7952 deletion completed in 6.099386191s

• [SLOW TEST:10.472 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:19:45.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:19:51.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9290" for this suite.
Aug 11 08:19:57.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:19:57.731: INFO: namespace namespaces-9290 deletion completed in 6.091242413s
STEP: Destroying namespace "nsdeletetest-2691" for this suite.
Aug 11 08:19:57.733: INFO: Namespace nsdeletetest-2691 was already deleted
STEP: Destroying namespace "nsdeletetest-5262" for this suite.
Aug 11 08:20:03.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:20:03.828: INFO: namespace nsdeletetest-5262 deletion completed in 6.094628331s

• [SLOW TEST:18.559 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
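
The same lifecycle can be walked through by hand. Deleting a namespace cascades to everything in it, so a recreated namespace of the same name starts empty; the names below are illustrative:

  kubectl create namespace nsdeletetest-demo
  kubectl create service clusterip test-service --tcp=80:80 -n nsdeletetest-demo
  kubectl delete namespace nsdeletetest-demo   # waits for contents to be finalized
  kubectl create namespace nsdeletetest-demo
  kubectl get services -n nsdeletetest-demo    # no services survive the round trip
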
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:20:03.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:20:07.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7643" for this suite.
Aug 11 08:20:57.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:20:58.024: INFO: namespace kubelet-test-7643 deletion completed in 50.088598919s

• [SLOW TEST:54.196 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
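
hostAliases entries are rendered by the kubelet into the pod's /etc/hosts, which is all this spec asserts. A sketch (pod name, IP and hostnames illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo       # hypothetical name
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "123.45.67.89"
      hostnames: ["foo.local", "bar.local"]
    containers:
    - name: busybox
      image: busybox
      command: ["cat", "/etc/hosts"]   # the alias lines appear at the end
  EOF
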
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:20:58.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 11 08:20:58.109: INFO: Waiting up to 5m0s for pod "pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9" in namespace "emptydir-5429" to be "success or failure"
Aug 11 08:20:58.112: INFO: Pod "pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.316523ms
Aug 11 08:21:00.115: INFO: Pod "pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00634954s
Aug 11 08:21:02.120: INFO: Pod "pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010783121s
STEP: Saw pod success
Aug 11 08:21:02.120: INFO: Pod "pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9" satisfied condition "success or failure"
Aug 11 08:21:02.123: INFO: Trying to get logs from node iruya-worker pod pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9 container test-container: 
STEP: delete the pod
Aug 11 08:21:02.159: INFO: Waiting for pod pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9 to disappear
Aug 11 08:21:02.177: INFO: Pod pod-8fdaffc1-7c08-4303-9a14-a58a52ed20e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:21:02.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5429" for this suite.
Aug 11 08:21:08.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:21:08.270: INFO: namespace emptydir-5429 deletion completed in 6.089745966s

• [SLOW TEST:10.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
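
Each EmptyDir permission spec in this run follows the same pattern: mount an emptyDir volume (here backed by tmpfs via medium "Memory"), create a file with the mode under test, and assert on what the container observes; the (root,0644,default), (root,0777,default), and (root,0644,tmpfs) runs later in this log vary only the mode and the medium. A rough sketch of that plumbing, with illustrative names and plain busybox standing in for the suite's mount-test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs; leave
					// Medium empty for the node's default storage medium.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/mnt/test",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
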
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:21:08.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2631, will wait for the garbage collector to delete the pods
Aug 11 08:21:14.423: INFO: Deleting Job.batch foo took: 6.966266ms
Aug 11 08:21:14.723: INFO: Terminating Job.batch foo pods took: 300.295325ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:21:56.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2631" for this suite.
Aug 11 08:22:02.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:22:02.424: INFO: namespace job-2631 deletion completed in 6.091962838s

• [SLOW TEST:54.154 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
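
The deletion step above ("will wait for the garbage collector to delete the pods") corresponds to deleting the Job with a background propagation policy and then polling until the GC has reaped its pods; the later Garbage collector spec in this run ("keep the rc around until all its pods are deleted") exercises the complementary Foreground policy. A hedged sketch using client-go with the same vintage as this suite (v1.15-era signatures, no context argument; newer releases differ). The kubeconfig path, namespace, and job name are taken from this run:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background propagation returns as soon as the Job is marked for
	// deletion and leaves the garbage collector to delete its pods,
	// which is why the test then waits for the pods to disappear.
	policy := metav1.DeletePropagationBackground
	err = clientset.BatchV1().Jobs("job-2631").Delete("foo", &metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("deletion requested; pods are reaped asynchronously")
}
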
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:22:02.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-4b66e353-d040-409d-aeb8-2c91ec93bff2
STEP: Creating a pod to test consume secrets
Aug 11 08:22:02.547: INFO: Waiting up to 5m0s for pod "pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8" in namespace "secrets-7235" to be "success or failure"
Aug 11 08:22:02.556: INFO: Pod "pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.134718ms
Aug 11 08:22:04.560: INFO: Pod "pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013368306s
Aug 11 08:22:06.584: INFO: Pod "pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03744449s
STEP: Saw pod success
Aug 11 08:22:06.584: INFO: Pod "pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8" satisfied condition "success or failure"
Aug 11 08:22:06.587: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8 container secret-volume-test: 
STEP: delete the pod
Aug 11 08:22:06.611: INFO: Waiting for pod pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8 to disappear
Aug 11 08:22:06.622: INFO: Pod pod-secrets-ce07b4eb-e63e-46a1-9e5a-39f43b235ca8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:22:06.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7235" for this suite.
Aug 11 08:22:12.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:22:12.732: INFO: namespace secrets-7235 deletion completed in 6.107080548s

• [SLOW TEST:10.308 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
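
"With mappings" means the secret's keys are remapped to custom file paths inside the volume via items, rather than appearing under their key names. A sketch of that shape, assuming k8s.io/api types; the secret name, key, and paths are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mapped"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						// Items remaps key "data-1" to a custom file path
						// instead of the default /<key>.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
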
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:22:12.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-c5b0290e-6125-4d97-95e8-7bf6e1d9f50f
STEP: Creating configMap with name cm-test-opt-upd-57fb1294-6483-4c0f-84d9-783422dbc132
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c5b0290e-6125-4d97-95e8-7bf6e1d9f50f
STEP: Updating configmap cm-test-opt-upd-57fb1294-6483-4c0f-84d9-783422dbc132
STEP: Creating configMap with name cm-test-opt-create-2cf398d6-7e6b-4410-9ab0-d2e22165ff13
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:22:23.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4099" for this suite.
Aug 11 08:22:45.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:22:45.113: INFO: namespace projected-4099 deletion completed in 22.093657735s

• [SLOW TEST:32.381 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
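
This test projects several configMaps into one volume, marks the sources optional, then deletes one and updates another while the pod runs, waiting for the kubelet to resync the volume contents. A sketch of the projected-volume shape only (configMap names shortened from the log's generated ones, the rest assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Optional sources may be deleted while the pod runs;
					// their files vanish on the kubelet's next sync rather
					// than breaking the mount.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
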
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:22:45.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 11 08:22:45.177: INFO: Waiting up to 5m0s for pod "pod-fa4cf183-6f73-4ad4-9616-655381c34ee9" in namespace "emptydir-721" to be "success or failure"
Aug 11 08:22:45.189: INFO: Pod "pod-fa4cf183-6f73-4ad4-9616-655381c34ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.197172ms
Aug 11 08:22:47.193: INFO: Pod "pod-fa4cf183-6f73-4ad4-9616-655381c34ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015432089s
Aug 11 08:22:49.196: INFO: Pod "pod-fa4cf183-6f73-4ad4-9616-655381c34ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019009577s
STEP: Saw pod success
Aug 11 08:22:49.196: INFO: Pod "pod-fa4cf183-6f73-4ad4-9616-655381c34ee9" satisfied condition "success or failure"
Aug 11 08:22:49.199: INFO: Trying to get logs from node iruya-worker2 pod pod-fa4cf183-6f73-4ad4-9616-655381c34ee9 container test-container: 
STEP: delete the pod
Aug 11 08:22:49.217: INFO: Waiting for pod pod-fa4cf183-6f73-4ad4-9616-655381c34ee9 to disappear
Aug 11 08:22:49.241: INFO: Pod pod-fa4cf183-6f73-4ad4-9616-655381c34ee9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:22:49.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-721" for this suite.
Aug 11 08:22:55.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:22:55.347: INFO: namespace emptydir-721 deletion completed in 6.102518409s

• [SLOW TEST:10.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:22:55.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 11 08:22:55.437: INFO: Waiting up to 5m0s for pod "client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97" in namespace "containers-8686" to be "success or failure"
Aug 11 08:22:55.443: INFO: Pod "client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97": Phase="Pending", Reason="", readiness=false. Elapsed: 5.795296ms
Aug 11 08:22:57.529: INFO: Pod "client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091267028s
Aug 11 08:22:59.532: INFO: Pod "client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094968122s
STEP: Saw pod success
Aug 11 08:22:59.533: INFO: Pod "client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97" satisfied condition "success or failure"
Aug 11 08:22:59.535: INFO: Trying to get logs from node iruya-worker pod client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97 container test-container: 
STEP: delete the pod
Aug 11 08:22:59.553: INFO: Waiting for pod client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97 to disappear
Aug 11 08:22:59.557: INFO: Pod client-containers-8db1d785-fc51-4f44-a681-7d0e22068f97 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:22:59.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8686" for this suite.
Aug 11 08:23:05.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:23:05.648: INFO: namespace containers-8686 deletion completed in 6.087219809s

• [SLOW TEST:10.300 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
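
In pod-spec terms, command replaces the image's ENTRYPOINT and args replaces its CMD; setting both, as this "override all" test does, supersedes everything the image declares. A minimal illustration (image tag and strings are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29",
		// Command overrides the image's ENTRYPOINT; Args overrides its CMD.
		Command: []string{"/bin/echo"},
		Args:    []string{"override", "arguments"},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
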
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:23:05.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 11 08:23:05.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3020'
Aug 11 08:23:09.344: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 11 08:23:09.344: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Aug 11 08:23:09.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3020'
Aug 11 08:23:09.480: INFO: stderr: ""
Aug 11 08:23:09.480: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:23:09.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3020" for this suite.
Aug 11 08:23:15.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:23:15.605: INFO: namespace kubectl-3020 deletion completed in 6.107543557s

• [SLOW TEST:9.957 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:23:15.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 11 08:23:15.692: INFO: Waiting up to 5m0s for pod "pod-c5de3de7-d5a5-4995-9031-05ea08748094" in namespace "emptydir-9712" to be "success or failure"
Aug 11 08:23:15.742: INFO: Pod "pod-c5de3de7-d5a5-4995-9031-05ea08748094": Phase="Pending", Reason="", readiness=false. Elapsed: 49.951044ms
Aug 11 08:23:17.746: INFO: Pod "pod-c5de3de7-d5a5-4995-9031-05ea08748094": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05420486s
Aug 11 08:23:19.750: INFO: Pod "pod-c5de3de7-d5a5-4995-9031-05ea08748094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058513642s
STEP: Saw pod success
Aug 11 08:23:19.750: INFO: Pod "pod-c5de3de7-d5a5-4995-9031-05ea08748094" satisfied condition "success or failure"
Aug 11 08:23:19.754: INFO: Trying to get logs from node iruya-worker2 pod pod-c5de3de7-d5a5-4995-9031-05ea08748094 container test-container: 
STEP: delete the pod
Aug 11 08:23:19.775: INFO: Waiting for pod pod-c5de3de7-d5a5-4995-9031-05ea08748094 to disappear
Aug 11 08:23:19.799: INFO: Pod pod-c5de3de7-d5a5-4995-9031-05ea08748094 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:23:19.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9712" for this suite.
Aug 11 08:23:25.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:23:25.887: INFO: namespace emptydir-9712 deletion completed in 6.083951958s

• [SLOW TEST:10.282 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:23:25.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c241f243-4edc-45ca-9b5d-7a6c8a37f725
STEP: Creating a pod to test consume secrets
Aug 11 08:23:25.979: INFO: Waiting up to 5m0s for pod "pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7" in namespace "secrets-9748" to be "success or failure"
Aug 11 08:23:25.983: INFO: Pod "pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.776301ms
Aug 11 08:23:27.987: INFO: Pod "pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008078442s
Aug 11 08:23:29.992: INFO: Pod "pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012429648s
STEP: Saw pod success
Aug 11 08:23:29.992: INFO: Pod "pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7" satisfied condition "success or failure"
Aug 11 08:23:29.995: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7 container secret-env-test: 
STEP: delete the pod
Aug 11 08:23:30.139: INFO: Waiting for pod pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7 to disappear
Aug 11 08:23:30.180: INFO: Pod pod-secrets-2a4bac7c-f085-433e-a2c0-ef7e50bd65c7 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:23:30.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9748" for this suite.
Aug 11 08:23:36.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:23:36.279: INFO: namespace secrets-9748 deletion completed in 6.095445035s

• [SLOW TEST:10.391 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
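
Here the secret is consumed through the environment rather than a volume: each variable draws its value from one secret key via secretKeyRef. A container-level sketch (secret name and key are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "secret-env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				// Resolves to the value stored under key "data-1"
				// in secret "secret-test" at pod start.
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
					Key:                  "data-1",
				},
			},
		}},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
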
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:23:36.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 11 08:23:36.405: INFO: Waiting up to 5m0s for pod "downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42" in namespace "downward-api-3016" to be "success or failure"
Aug 11 08:23:36.408: INFO: Pod "downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.861817ms
Aug 11 08:23:38.412: INFO: Pod "downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006792565s
Aug 11 08:23:40.417: INFO: Pod "downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011146644s
STEP: Saw pod success
Aug 11 08:23:40.417: INFO: Pod "downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42" satisfied condition "success or failure"
Aug 11 08:23:40.420: INFO: Trying to get logs from node iruya-worker pod downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42 container dapi-container: 
STEP: delete the pod
Aug 11 08:23:40.440: INFO: Waiting for pod downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42 to disappear
Aug 11 08:23:40.450: INFO: Pod downward-api-f9b4bf0f-4930-4cbe-b0bf-0e2f28facb42 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:23:40.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3016" for this suite.
Aug 11 08:23:46.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:23:46.634: INFO: namespace downward-api-3016 deletion completed in 6.180432641s

• [SLOW TEST:10.354 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
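
The downward API exposes the container's own resource accounting through resourceFieldRef env sources, which is what this spec verifies. A sketch of the container shape involved; the specific quantities are illustrative, not the suite's:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("1250m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
		Env: []corev1.EnvVar{
			// Each resourceFieldRef reflects this container's own
			// limits/requests back into its environment.
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
			{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
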
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:23:46.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 08:23:46.698: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08" in namespace "projected-6509" to be "success or failure"
Aug 11 08:23:46.726: INFO: Pod "downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08": Phase="Pending", Reason="", readiness=false. Elapsed: 27.466956ms
Aug 11 08:23:48.785: INFO: Pod "downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086732048s
Aug 11 08:23:50.788: INFO: Pod "downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089973462s
STEP: Saw pod success
Aug 11 08:23:50.788: INFO: Pod "downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08" satisfied condition "success or failure"
Aug 11 08:23:50.791: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08 container client-container: 
STEP: delete the pod
Aug 11 08:23:50.848: INFO: Waiting for pod downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08 to disappear
Aug 11 08:23:50.866: INFO: Pod downwardapi-volume-9822ae00-b608-4bc2-b514-02c29a7e4e08 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:23:50.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6509" for this suite.
Aug 11 08:23:56.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:23:57.009: INFO: namespace projected-6509 deletion completed in 6.138672366s

• [SLOW TEST:10.374 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
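
DefaultMode on a projected volume sets the file permission bits for every projected file that does not specify its own mode. A volume-level sketch with an example mode of 0400 (the mode the suite actually asserts is not visible in this log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// Applies to every projected file without a per-item Mode.
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
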
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:23:57.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 08:23:57.109: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116" in namespace "downward-api-9953" to be "success or failure"
Aug 11 08:23:57.126: INFO: Pod "downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116": Phase="Pending", Reason="", readiness=false. Elapsed: 16.347002ms
Aug 11 08:23:59.130: INFO: Pod "downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020845263s
Aug 11 08:24:01.134: INFO: Pod "downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024524975s
STEP: Saw pod success
Aug 11 08:24:01.134: INFO: Pod "downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116" satisfied condition "success or failure"
Aug 11 08:24:01.137: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116 container client-container: 
STEP: delete the pod
Aug 11 08:24:01.277: INFO: Waiting for pod downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116 to disappear
Aug 11 08:24:01.303: INFO: Pod downwardapi-volume-87d9d16e-b541-46c7-8dd9-8ff8ecd46116 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:24:01.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9953" for this suite.
Aug 11 08:24:07.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:24:07.390: INFO: namespace downward-api-9953 deletion completed in 6.079915799s

• [SLOW TEST:10.381 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
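
For the cpu-request variant, the downward API volume writes a file whose contents come from a resourceFieldRef, scaled by a divisor. A sketch of the volume shape; the referenced container name and divisor are illustrative, and the named container must exist in the same pod with a cpu request set:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					// Divisor scales the exposed value: with 1m, a 250m
					// cpu request is written to the file as "250".
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
						Divisor:       resource.MustParse("1m"),
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
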
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:24:07.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-e1e3f3a0-c2b2-417d-900c-77aff1a4cdb2
STEP: Creating configMap with name cm-test-opt-upd-dfa3c58b-c3ed-431d-8ca7-e2bcd850a9d5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e1e3f3a0-c2b2-417d-900c-77aff1a4cdb2
STEP: Updating configmap cm-test-opt-upd-dfa3c58b-c3ed-431d-8ca7-e2bcd850a9d5
STEP: Creating configMap with name cm-test-opt-create-890a0308-a1aa-47d9-ac9c-edea795142b3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:25:19.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5404" for this suite.
Aug 11 08:25:41.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:25:41.905: INFO: namespace configmap-5404 deletion completed in 22.093374561s

• [SLOW TEST:94.514 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:25:41.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 11 08:25:45.991: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-94464bc1-1b1e-4fe1-a15d-8e518d70d890,GenerateName:,Namespace:events-1301,SelfLink:/api/v1/namespaces/events-1301/pods/send-events-94464bc1-1b1e-4fe1-a15d-8e518d70d890,UID:a1e9c7f1-e01c-4072-9745-378a8ffb6839,ResourceVersion:4152952,Generation:0,CreationTimestamp:2020-08-11 08:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 959084214,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t5zwz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t5zwz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-t5zwz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017c5eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017c5ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:25:42 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:25:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:25:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:25:41 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.105,StartTime:2020-08-11 08:25:42 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-11 08:25:44 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://089d68fdaba46c3b502b9e92e00dd23fead55a4dca69896e7b704be0654a5532}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug 11 08:25:47.996: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 11 08:25:50.001: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:25:50.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1301" for this suite.
Aug 11 08:26:28.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:26:28.170: INFO: namespace events-1301 deletion completed in 38.152093589s

• [SLOW TEST:46.264 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
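
"Checking for scheduler event" and "checking for kubelet event" amount to listing events whose involvedObject matches the pod and whose source is the component of interest. A sketch with v1.15-era client-go signatures (newer releases add a context argument); the namespace and pod name are taken from this run, the selector fields are the standard event field selectors:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Narrow the event list to one pod and one source; use
	// source "kubelet" for the kubelet-side check.
	selector := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-94464bc1-1b1e-4fe1-a15d-8e518d70d890",
		"involvedObject.namespace": "events-1301",
		"source":                   "default-scheduler",
	}.AsSelector().String()
	events, err := clientset.CoreV1().Events("events-1301").List(metav1.ListOptions{FieldSelector: selector})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Reason, e.Source.Component, e.Message)
	}
}
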
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:26:28.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Aug 11 08:26:28.267: INFO: Waiting up to 5m0s for pod "var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e" in namespace "var-expansion-6474" to be "success or failure"
Aug 11 08:26:28.295: INFO: Pod "var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.327372ms
Aug 11 08:26:30.299: INFO: Pod "var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031685023s
Aug 11 08:26:32.303: INFO: Pod "var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035804064s
STEP: Saw pod success
Aug 11 08:26:32.303: INFO: Pod "var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e" satisfied condition "success or failure"
Aug 11 08:26:32.306: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e container dapi-container: 
STEP: delete the pod
Aug 11 08:26:32.329: INFO: Waiting for pod var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e to disappear
Aug 11 08:26:32.348: INFO: Pod var-expansion-3760f894-d316-4262-bbe1-b2eece575c5e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:26:32.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6474" for this suite.
Aug 11 08:26:38.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:26:38.538: INFO: namespace var-expansion-6474 deletion completed in 6.185946534s

• [SLOW TEST:10.367 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
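
Variable expansion means the kubelet substitutes $(VAR) references in command and args from the container's declared environment before the process starts, so no shell is required. A minimal container-level illustration (names and message are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test message"}},
		// The kubelet expands $(MESSAGE) from Env before exec'ing
		// the command, so plain echo prints "test message".
		Command: []string{"/bin/echo", "$(MESSAGE)"},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
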
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:26:38.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:26:38.574: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 11 08:26:40.632: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:26:41.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5422" for this suite.
Aug 11 08:26:50.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:26:50.118: INFO: namespace replication-controller-5422 deletion completed in 8.166034621s

• [SLOW TEST:11.580 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
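
The failure condition arises because a ResourceQuota caps the namespace at two pods while the rc asks for more; the rc controller then records a failure condition on the rc until it is scaled back within quota, at which point the condition clears, as the steps above show. The quota object looks roughly like this (the name matches the log; the shape assumes k8s.io/api types):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			// Hard cap of two pods in the namespace; an rc asking for
			// more replicas surfaces a failure condition instead of
			// silently retrying forever.
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	b, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(b))
}
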
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:26:50.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ff8f059e-196a-41a2-aa47-f87f3c51f3a6
STEP: Creating a pod to test consume configMaps
Aug 11 08:26:50.236: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390" in namespace "configmap-9275" to be "success or failure"
Aug 11 08:26:50.240: INFO: Pod "pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081561ms
Aug 11 08:26:52.244: INFO: Pod "pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008356636s
Aug 11 08:26:54.248: INFO: Pod "pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011937088s
STEP: Saw pod success
Aug 11 08:26:54.248: INFO: Pod "pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390" satisfied condition "success or failure"
Aug 11 08:26:54.250: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390 container configmap-volume-test: 
STEP: delete the pod
Aug 11 08:26:54.402: INFO: Waiting for pod pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390 to disappear
Aug 11 08:26:54.533: INFO: Pod pod-configmaps-ca905b0c-0a4e-4c1e-99ee-a120d53e7390 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:26:54.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9275" for this suite.
Aug 11 08:27:00.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:27:00.650: INFO: namespace configmap-9275 deletion completed in 6.112534186s

• [SLOW TEST:10.531 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
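
"Mappings and Item mode set" combines two knobs: items remap a configMap key to a custom path, and a per-item mode overrides the volume's default permission bits for just that file. A volume-level sketch (configMap name, key, path, and mode are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "mapped/data-1",
					// A per-item Mode overrides the volume's DefaultMode
					// for this file only.
					Mode: &itemMode,
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
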
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:27:00.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:27:00.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:27:04.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3046" for this suite.
Aug 11 08:27:46.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:27:46.990: INFO: namespace pods-3046 deletion completed in 42.091710304s

• [SLOW TEST:46.340 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
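
This test dials the pod's exec subresource directly over a websocket; the more common client-go route to the same endpoint is an SPDY executor, sketched below with v1.15-era signatures. The namespace matches this run, while the pod and container names are illustrative (the log does not show them):

package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// POST to .../pods/<name>/exec with the command encoded as query
	// parameters, then stream stdout/stderr over the upgraded connection.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("pods-3046").Name("pod-exec-websocket").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "main",
			Command:   []string{"echo", "remote execution"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}
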
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:27:46.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:28:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4636" for this suite.
Aug 11 08:28:21.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:28:21.348: INFO: namespace namespaces-4636 deletion completed in 6.09185133s
STEP: Destroying namespace "nsdeletetest-5995" for this suite.
Aug 11 08:28:21.350: INFO: Namespace nsdeletetest-5995 was already deleted
STEP: Destroying namespace "nsdeletetest-7274" for this suite.
Aug 11 08:28:27.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:28:27.474: INFO: namespace nsdeletetest-7274 deletion completed in 6.124025614s

• [SLOW TEST:40.483 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:28:27.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 11 08:28:34.624: INFO: 0 pods remaining
Aug 11 08:28:34.624: INFO: 0 pods have nil DeletionTimestamp
Aug 11 08:28:34.624: INFO: 
Aug 11 08:28:36.074: INFO: 0 pods remaining
Aug 11 08:28:36.074: INFO: 0 pods have nil DeletionTimestamp
Aug 11 08:28:36.074: INFO: 
Aug 11 08:28:37.121: INFO: 0 pods remaining
Aug 11 08:28:37.121: INFO: 0 pods have nil DeletionTimestamp
Aug 11 08:28:37.121: INFO: 
STEP: Gathering metrics
W0811 08:28:38.048489       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 08:28:38.048: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:28:38.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1908" for this suite.
Aug 11 08:28:44.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:28:44.466: INFO: namespace gc-1908 deletion completed in 6.3379285s

• [SLOW TEST:16.992 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:28:44.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 11 08:28:44.569: INFO: Waiting up to 5m0s for pod "pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123" in namespace "emptydir-7555" to be "success or failure"
Aug 11 08:28:44.578: INFO: Pod "pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123": Phase="Pending", Reason="", readiness=false. Elapsed: 9.0669ms
Aug 11 08:28:46.582: INFO: Pod "pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012838239s
Aug 11 08:28:48.585: INFO: Pod "pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016242035s
STEP: Saw pod success
Aug 11 08:28:48.585: INFO: Pod "pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123" satisfied condition "success or failure"
Aug 11 08:28:48.587: INFO: Trying to get logs from node iruya-worker2 pod pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123 container test-container: 
STEP: delete the pod
Aug 11 08:28:48.738: INFO: Waiting for pod pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123 to disappear
Aug 11 08:28:48.812: INFO: Pod pod-9ceb4c8d-4b38-41ac-b762-756ed5e3b123 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:28:48.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7555" for this suite.
Aug 11 08:28:54.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:28:54.995: INFO: namespace emptydir-7555 deletion completed in 6.179457928s

• [SLOW TEST:10.528 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
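
The pod built above pairs a memory-backed emptyDir with a test container that writes /test-volume/test-file with mode 0644 and exits, which is why the pod is expected to reach Succeeded. A sketch of that shape, assuming the suite's mounttest image; the flags passed here are illustrative of what the conformance test uses:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsPod declares an emptyDir with Medium=Memory, so the volume is a tmpfs
// mount rather than node disk; RestartPolicy=Never lets the pod terminate.
func tmpfsPod(name string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
				Args: []string{
					"--fs_type=/test-volume",
					"--new_file_0644=/test-volume/test-file",
					"--file_perm=/test-volume/test-file",
				},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}

The (root,0777,tmpfs) variant later in this run differs only in the requested file mode.
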
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:28:54.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 11 08:28:59.639: INFO: Successfully updated pod "labelsupdate01704091-36e7-4ba5-a0b4-d133bdd3afe4"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:29:03.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9504" for this suite.
Aug 11 08:29:25.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:29:25.766: INFO: namespace projected-9504 deletion completed in 22.094942909s

• [SLOW TEST:30.771 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
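
Behind "Successfully updated pod" is a projected downward-API volume: the pod's labels are projected into a file, and the kubelet rewrites that file once the labels on the live pod are patched, which the test then reads back. A sketch of the volume definition, with illustrative names:

package e2esketch

import v1 "k8s.io/api/core/v1"

// labelsVolume projects metadata.labels into a "labels" file. Unlike env
// vars, file projections are refreshed by the kubelet on label updates,
// which is the property this test verifies.
func labelsVolume() v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}
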
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:29:25.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 11 08:29:25.822: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153795,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 11 08:29:25.822: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153795,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 11 08:29:35.829: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153816,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 11 08:29:35.829: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153816,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 11 08:29:45.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153837,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 11 08:29:45.838: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153837,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 11 08:29:55.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153857,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 11 08:29:55.845: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-a,UID:de8bff23-d8e8-4ead-93a5-9c9827628a83,ResourceVersion:4153857,Generation:0,CreationTimestamp:2020-08-11 08:29:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 11 08:30:05.852: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-b,UID:d182ae5d-a0a8-4cec-b5cc-0530aad9ea54,ResourceVersion:4153877,Generation:0,CreationTimestamp:2020-08-11 08:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 11 08:30:05.852: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-b,UID:d182ae5d-a0a8-4cec-b5cc-0530aad9ea54,ResourceVersion:4153877,Generation:0,CreationTimestamp:2020-08-11 08:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 11 08:30:15.858: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-b,UID:d182ae5d-a0a8-4cec-b5cc-0530aad9ea54,ResourceVersion:4153898,Generation:0,CreationTimestamp:2020-08-11 08:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 11 08:30:15.858: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3009,SelfLink:/api/v1/namespaces/watch-3009/configmaps/e2e-watch-test-configmap-b,UID:d182ae5d-a0a8-4cec-b5cc-0530aad9ea54,ResourceVersion:4153898,Generation:0,CreationTimestamp:2020-08-11 08:30:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:30:25.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3009" for this suite.
Aug 11 08:30:31.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:30:31.954: INFO: namespace watch-3009 deletion completed in 6.090022555s

• [SLOW TEST:66.187 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
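
The three watchers above are plain label-selected watches on the configmaps resource; each "Got : ADDED/MODIFIED/DELETED" line is one event delivered on a watch channel, and the A-or-B watcher is why every event appears twice. A minimal sketch for one watcher, assuming the pre-context Watch signature of the client-go release matching v1.15:

package e2esketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps opens a label-selected watch and prints each event,
// mirroring the "Got : ..." lines in the log. Newer client-go releases
// take a context.Context as the first Watch argument.
func watchConfigMaps(c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
	return nil
}
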
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:30:31.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5070
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5070 to expose endpoints map[]
Aug 11 08:30:32.083: INFO: Get endpoints failed (53.304858ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 11 08:30:33.086: INFO: successfully validated that service multi-endpoint-test in namespace services-5070 exposes endpoints map[] (1.056523732s elapsed)
STEP: Creating pod pod1 in namespace services-5070
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5070 to expose endpoints map[pod1:[100]]
Aug 11 08:30:37.130: INFO: successfully validated that service multi-endpoint-test in namespace services-5070 exposes endpoints map[pod1:[100]] (4.037559479s elapsed)
STEP: Creating pod pod2 in namespace services-5070
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5070 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 11 08:30:40.188: INFO: successfully validated that service multi-endpoint-test in namespace services-5070 exposes endpoints map[pod1:[100] pod2:[101]] (3.055087434s elapsed)
STEP: Deleting pod pod1 in namespace services-5070
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5070 to expose endpoints map[pod2:[101]]
Aug 11 08:30:40.223: INFO: successfully validated that service multi-endpoint-test in namespace services-5070 exposes endpoints map[pod2:[101]] (24.21875ms elapsed)
STEP: Deleting pod pod2 in namespace services-5070
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5070 to expose endpoints map[]
Aug 11 08:30:41.275: INFO: successfully validated that service multi-endpoint-test in namespace services-5070 exposes endpoints map[] (1.046988398s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:30:41.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5070" for this suite.
Aug 11 08:30:47.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:30:47.502: INFO: namespace services-5070 deletion completed in 6.154304099s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:15.547 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
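
The endpoints maps above (pod1:[100], pod2:[101]) list container target ports: the service declares two named ports, and the endpoints controller publishes one address/port pair per ready backing pod, shrinking the map as pods are deleted. A sketch of a service of that shape; the service port numbers and selector are illustrative:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiportService exposes two ports that target the container ports the
// test's pods open (100 and 101 in the log above).
func multiportService() *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: v1.ServiceSpec{
			Selector: map[string]string{"name": "multi-endpoint-test"},
			Ports: []v1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}
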
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:30:47.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:30:47.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-668'
Aug 11 08:30:47.923: INFO: stderr: ""
Aug 11 08:30:47.923: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 11 08:30:47.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-668'
Aug 11 08:30:48.225: INFO: stderr: ""
Aug 11 08:30:48.225: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 11 08:30:49.245: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:30:49.245: INFO: Found 0 / 1
Aug 11 08:30:50.270: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:30:50.270: INFO: Found 0 / 1
Aug 11 08:30:51.231: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:30:51.231: INFO: Found 0 / 1
Aug 11 08:30:52.230: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:30:52.230: INFO: Found 1 / 1
Aug 11 08:30:52.230: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 11 08:30:52.233: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:30:52.233: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 11 08:30:52.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xdwgs --namespace=kubectl-668'
Aug 11 08:30:52.346: INFO: stderr: ""
Aug 11 08:30:52.346: INFO: stdout: "Name:           redis-master-xdwgs\nNamespace:      kubectl-668\nPriority:       0\nNode:           iruya-worker2/172.18.0.7\nStart Time:     Tue, 11 Aug 2020 08:30:47 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.244.2.35\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://35daf850e19257084335052fd14271137d4ffaa05451f0e08141995377e3fd90\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 11 Aug 2020 08:30:50 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j4nq9 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-j4nq9:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-j4nq9\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  5s    default-scheduler       Successfully assigned kubectl-668/redis-master-xdwgs to iruya-worker2\n  Normal  Pulled     3s    kubelet, iruya-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-worker2  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-worker2  Started container redis-master\n"
Aug 11 08:30:52.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-668'
Aug 11 08:30:52.463: INFO: stderr: ""
Aug 11 08:30:52.463: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-668\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-xdwgs\n"
Aug 11 08:30:52.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-668'
Aug 11 08:30:52.585: INFO: stderr: ""
Aug 11 08:30:52.585: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-668\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.103.124.152\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.35:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug 11 08:30:52.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 11 08:30:52.721: INFO: stderr: ""
Aug 11 08:30:52.722: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 19 Jul 2020 21:15:33 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 11 Aug 2020 08:30:41 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 11 Aug 2020 08:30:41 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 11 Aug 2020 08:30:41 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 11 Aug 2020 08:30:41 +0000   Sun, 19 Jul 2020 21:16:03 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 ca83ac9a93d54502bb9afb972c3f1f0b\n System UUID:                1d4ac873-683f-4805-8579-15bbb4e4df77\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-clz9n                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     22d\n  kube-system                coredns-5d4dd4b4db-w42x4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     22d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d\n  kube-system                kindnet-xbjsm                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      22d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         22d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         22d\n  kube-system                kube-proxy-nwhvb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         22d\n  local-path-storage         local-path-provisioner-668779bd7-sf66r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug 11 08:30:52.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-668'
Aug 11 08:30:52.830: INFO: stderr: ""
Aug 11 08:30:52.830: INFO: stdout: "Name:         kubectl-668\nLabels:       e2e-framework=kubectl\n              e2e-run=4f8c74a2-8748-43fc-a184-4dc31b6847fb\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:30:52.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-668" for this suite.
Aug 11 08:31:14.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:31:14.944: INFO: namespace kubectl-668 deletion completed in 22.110137908s

• [SLOW TEST:27.442 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
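
Each "Running '/usr/local/bin/kubectl ...'" line above is the framework shelling out to the kubectl binary with an explicit --kubeconfig and capturing stdout and stderr for its assertions. A minimal sketch of the same pattern; the helper name is illustrative, and output here is captured combined rather than split into the separate stdout/stderr strings the log shows:

package e2esketch

import "os/exec"

// runKubectlDescribe execs kubectl the way the e2e framework does, with an
// explicit kubeconfig and namespace, and returns the captured output.
func runKubectlDescribe(kind, name, namespace string) (string, error) {
	out, err := exec.Command(
		"/usr/local/bin/kubectl", "--kubeconfig=/root/.kube/config",
		"describe", kind, name, "--namespace="+namespace,
	).CombinedOutput()
	return string(out), err
}
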
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:31:14.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 11 08:31:15.033: INFO: Waiting up to 5m0s for pod "pod-49138671-b2af-41d0-b8e8-6c7b58999ec4" in namespace "emptydir-6549" to be "success or failure"
Aug 11 08:31:15.036: INFO: Pod "pod-49138671-b2af-41d0-b8e8-6c7b58999ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.656302ms
Aug 11 08:31:17.040: INFO: Pod "pod-49138671-b2af-41d0-b8e8-6c7b58999ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006573944s
Aug 11 08:31:19.044: INFO: Pod "pod-49138671-b2af-41d0-b8e8-6c7b58999ec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010598481s
STEP: Saw pod success
Aug 11 08:31:19.044: INFO: Pod "pod-49138671-b2af-41d0-b8e8-6c7b58999ec4" satisfied condition "success or failure"
Aug 11 08:31:19.047: INFO: Trying to get logs from node iruya-worker pod pod-49138671-b2af-41d0-b8e8-6c7b58999ec4 container test-container: 
STEP: delete the pod
Aug 11 08:31:19.127: INFO: Waiting for pod pod-49138671-b2af-41d0-b8e8-6c7b58999ec4 to disappear
Aug 11 08:31:19.134: INFO: Pod pod-49138671-b2af-41d0-b8e8-6c7b58999ec4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:31:19.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6549" for this suite.
Aug 11 08:31:25.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:31:25.223: INFO: namespace emptydir-6549 deletion completed in 6.086243057s

• [SLOW TEST:10.279 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:31:25.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-a24bd9ad-8148-4274-8ffb-9d2dcfe27473
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:31:25.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1023" for this suite.
Aug 11 08:31:31.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:31:31.399: INFO: namespace configmap-1023 deletion completed in 6.080470922s

• [SLOW TEST:6.175 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
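
This is a negative test: API validation rejects a ConfigMap whose Data map contains an empty key, so the Create call must fail with an Invalid error for the test to pass. A sketch, assuming the pre-context client-go Create signature; the ConfigMap name is illustrative:

package e2esketch

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeyConfigMap succeeds only if the API server rejects the
// empty data key, which is the behaviour under test.
func createEmptyKeyConfigMap(c kubernetes.Interface, ns string) error {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value-1"},
	}
	_, err := c.CoreV1().ConfigMaps(ns).Create(cm)
	if apierrors.IsInvalid(err) {
		return nil // expected validation failure
	}
	return fmt.Errorf("expected Invalid error, got: %v", err)
}
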
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:31:31.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4cc49692-4be8-4f7c-9c8e-670b94678462
STEP: Creating a pod to test consume secrets
Aug 11 08:31:31.565: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5" in namespace "projected-4523" to be "success or failure"
Aug 11 08:31:31.578: INFO: Pod "pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.482377ms
Aug 11 08:31:33.582: INFO: Pod "pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01666503s
Aug 11 08:31:35.585: INFO: Pod "pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020486834s
STEP: Saw pod success
Aug 11 08:31:35.586: INFO: Pod "pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5" satisfied condition "success or failure"
Aug 11 08:31:35.589: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5 container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 08:31:35.648: INFO: Waiting for pod pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5 to disappear
Aug 11 08:31:35.705: INFO: Pod pod-projected-secrets-11a60c78-de3f-4c9a-b6dd-ee7f903c26f5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:31:35.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4523" for this suite.
Aug 11 08:31:41.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:31:41.795: INFO: namespace projected-4523 deletion completed in 6.0855186s

• [SLOW TEST:10.395 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
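
"With mappings" means the projected secret volume remaps secret keys onto custom file paths through items, instead of using the key names directly as file names. A sketch of such a volume; the key and path are illustrative stand-ins for the generated names in the log:

package e2esketch

import v1 "k8s.io/api/core/v1"

// mappedSecretVolume projects one secret key to a chosen path inside the
// mount; once Items is set, keys without a mapping are not projected.
func mappedSecretVolume(secretName string) v1.Volume {
	return v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: secretName},
						Items: []v1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
						}},
					},
				}},
			},
		},
	}
}
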
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:31:41.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ac94d36f-0450-4868-aa40-f45898e10127
STEP: Creating a pod to test consume configMaps
Aug 11 08:31:41.944: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d" in namespace "configmap-8775" to be "success or failure"
Aug 11 08:31:41.971: INFO: Pod "pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.857955ms
Aug 11 08:31:44.043: INFO: Pod "pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098955184s
Aug 11 08:31:46.047: INFO: Pod "pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10339931s
STEP: Saw pod success
Aug 11 08:31:46.047: INFO: Pod "pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d" satisfied condition "success or failure"
Aug 11 08:31:46.051: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d container configmap-volume-test: 
STEP: delete the pod
Aug 11 08:31:46.128: INFO: Waiting for pod pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d to disappear
Aug 11 08:31:46.139: INFO: Pod pod-configmaps-c6b2c4ed-d26c-46b4-84a0-cf16cc44ac0d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:31:46.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8775" for this suite.
Aug 11 08:31:52.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:31:52.242: INFO: namespace configmap-8775 deletion completed in 6.09946701s

• [SLOW TEST:10.446 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
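
The non-root variant combines a configMap volume (again with a key-to-path mapping) with a container that runs under a non-root UID, so the test also proves the projected file is readable without root. A sketch of the spec shape; the UID, image, flags and names are illustrative:

package e2esketch

import v1 "k8s.io/api/core/v1"

// nonRootConfigMapSpec runs the reading container as UID 1000 while the
// configMap key is remapped to a nested path inside the mount.
func nonRootConfigMapSpec(configMapName string) v1.PodSpec {
	uid := int64(1000)
	return v1.PodSpec{
		RestartPolicy:   v1.RestartPolicyNever,
		SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []v1.Volume{{
			Name: "configmap-volume",
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
					Items:                []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
				},
			},
		}},
		Containers: []v1.Container{{
			Name:         "configmap-volume-test",
			Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0",
			Args:         []string{"--file_content=/etc/configmap-volume/path/to/data-2"},
			VolumeMounts: []v1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
		}},
	}
}
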
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:31:52.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 11 08:31:52.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-91'
Aug 11 08:31:52.594: INFO: stderr: ""
Aug 11 08:31:52.594: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 08:31:52.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91'
Aug 11 08:31:52.765: INFO: stderr: ""
Aug 11 08:31:52.766: INFO: stdout: "update-demo-nautilus-kfhnj update-demo-nautilus-lmfjv "
Aug 11 08:31:52.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kfhnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91'
Aug 11 08:31:52.884: INFO: stderr: ""
Aug 11 08:31:52.884: INFO: stdout: ""
Aug 11 08:31:52.884: INFO: update-demo-nautilus-kfhnj is created but not running
Aug 11 08:31:57.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-91'
Aug 11 08:31:57.985: INFO: stderr: ""
Aug 11 08:31:57.985: INFO: stdout: "update-demo-nautilus-kfhnj update-demo-nautilus-lmfjv "
Aug 11 08:31:57.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kfhnj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91'
Aug 11 08:31:58.081: INFO: stderr: ""
Aug 11 08:31:58.081: INFO: stdout: "true"
Aug 11 08:31:58.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kfhnj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91'
Aug 11 08:31:58.181: INFO: stderr: ""
Aug 11 08:31:58.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:31:58.181: INFO: validating pod update-demo-nautilus-kfhnj
Aug 11 08:31:58.185: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:31:58.185: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 08:31:58.185: INFO: update-demo-nautilus-kfhnj is verified up and running
Aug 11 08:31:58.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lmfjv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-91'
Aug 11 08:31:58.280: INFO: stderr: ""
Aug 11 08:31:58.280: INFO: stdout: "true"
Aug 11 08:31:58.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lmfjv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-91'
Aug 11 08:31:58.379: INFO: stderr: ""
Aug 11 08:31:58.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:31:58.379: INFO: validating pod update-demo-nautilus-lmfjv
Aug 11 08:31:58.383: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:31:58.383: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 11 08:31:58.383: INFO: update-demo-nautilus-lmfjv is verified up and running
STEP: using delete to clean up resources
Aug 11 08:31:58.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-91'
Aug 11 08:31:58.487: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:31:58.487: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 11 08:31:58.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-91'
Aug 11 08:31:58.576: INFO: stderr: "No resources found.\n"
Aug 11 08:31:58.576: INFO: stdout: ""
Aug 11 08:31:58.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-91 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 08:31:58.669: INFO: stderr: ""
Aug 11 08:31:58.669: INFO: stdout: "update-demo-nautilus-kfhnj\nupdate-demo-nautilus-lmfjv\n"
Aug 11 08:31:59.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-91'
Aug 11 08:31:59.271: INFO: stderr: "No resources found.\n"
Aug 11 08:31:59.271: INFO: stdout: ""
Aug 11 08:31:59.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-91 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 08:31:59.364: INFO: stderr: ""
Aug 11 08:31:59.364: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:31:59.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-91" for this suite.
Aug 11 08:32:21.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:32:21.514: INFO: namespace kubectl-91 deletion completed in 22.146269581s

• [SLOW TEST:29.272 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:32:21.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 11 08:32:21.579: INFO: Waiting up to 5m0s for pod "downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72" in namespace "downward-api-6195" to be "success or failure"
Aug 11 08:32:21.598: INFO: Pod "downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72": Phase="Pending", Reason="", readiness=false. Elapsed: 19.127543ms
Aug 11 08:32:23.602: INFO: Pod "downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023271749s
Aug 11 08:32:25.607: INFO: Pod "downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027674436s
STEP: Saw pod success
Aug 11 08:32:25.607: INFO: Pod "downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72" satisfied condition "success or failure"
Aug 11 08:32:25.610: INFO: Trying to get logs from node iruya-worker2 pod downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72 container dapi-container: 
STEP: delete the pod
Aug 11 08:32:25.639: INFO: Waiting for pod downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72 to disappear
Aug 11 08:32:25.649: INFO: Pod downward-api-810388d6-9224-44f1-91d8-7fd9ae55fa72 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:32:25.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6195" for this suite.
Aug 11 08:32:31.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:32:31.753: INFO: namespace downward-api-6195 deletion completed in 6.100772989s

• [SLOW TEST:10.239 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
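
The env vars asserted here are injected through fieldRef selectors on the pod spec, resolved by the kubelet at container start; the test container only has to echo them. A sketch of the wiring; the variable names are illustrative:

package e2esketch

import v1 "k8s.io/api/core/v1"

// downwardAPIEnv exposes the pod's name, namespace and IP as env vars via
// the downward API. Note status.podIP is resolved at start time and, unlike
// a downward-API volume file, is never refreshed afterwards.
func downwardAPIEnv() []v1.EnvVar {
	fieldEnv := func(name, path string) v1.EnvVar {
		return v1.EnvVar{
			Name:      name,
			ValueFrom: &v1.EnvVarSource{FieldRef: &v1.ObjectFieldSelector{FieldPath: path}},
		}
	}
	return []v1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}
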
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:32:31.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:32:35.969: INFO: Waiting up to 5m0s for pod "client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f" in namespace "pods-9918" to be "success or failure"
Aug 11 08:32:35.974: INFO: Pod "client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.238012ms
Aug 11 08:32:38.005: INFO: Pod "client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035664693s
Aug 11 08:32:40.062: INFO: Pod "client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092366678s
STEP: Saw pod success
Aug 11 08:32:40.062: INFO: Pod "client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f" satisfied condition "success or failure"
Aug 11 08:32:40.065: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f container env3cont: 
STEP: delete the pod
Aug 11 08:32:40.083: INFO: Waiting for pod client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f to disappear
Aug 11 08:32:40.088: INFO: Pod client-envvars-f309cd45-7a23-43c8-afe1-50246355c14f no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:32:40.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9918" for this suite.
Aug 11 08:33:30.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:33:30.182: INFO: namespace pods-9918 deletion completed in 50.090015659s

• [SLOW TEST:58.429 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:33:30.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 11 08:33:30.251: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:33:37.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8797" for this suite.
Aug 11 08:33:43.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:33:44.033: INFO: namespace init-container-8797 deletion completed in 6.092197666s

• [SLOW TEST:13.850 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
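
On a RestartNever pod, init containers must run to completion, one at a time and in order, before the app container starts; a failing init container leaves the whole pod Failed rather than retrying. A sketch of the pod shape; images and commands are illustrative:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod declares two init containers that must both exit 0
// before the single app container is started.
func initContainerPod(name string) *v1.Pod {
	busybox := "docker.io/library/busybox:1.29"
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			InitContainers: []v1.Container{
				{Name: "init1", Image: busybox, Command: []string{"/bin/true"}},
				{Name: "init2", Image: busybox, Command: []string{"/bin/true"}},
			},
			Containers: []v1.Container{
				{Name: "run1", Image: busybox, Command: []string{"/bin/true"}},
			},
		},
	}
}
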
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:33:44.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:33:44.109: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

[the proxied kubelet /logs/ listing above was returned identically for each of this test's remaining attempts; the tail of the proxy test's output (AfterEach, namespace teardown and SLOW TEST summary) did not survive in this capture]
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 11 08:33:58.489: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:33:58.497: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:00.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:00.524: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:02.498: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:02.502: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:04.498: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:04.524: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:06.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:06.501: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:08.498: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:08.501: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:10.498: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:10.502: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:12.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:12.548: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:14.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:14.508: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:16.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:16.501: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:18.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:18.501: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:20.498: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:20.510: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:22.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:22.502: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:24.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:24.502: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 11 08:34:26.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 11 08:34:26.501: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:34:26.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2183" for this suite.
Aug 11 08:34:48.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:34:48.590: INFO: namespace container-lifecycle-hook-2183 deletion completed in 22.084091222s

• [SLOW TEST:58.318 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
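
A minimal sketch of the postStart arrangement above, using the v1.15-era corev1.Handler type this suite builds against (later releases renamed it LifecycleHandler). The wget target mirrors the "container to handle the HTTPGet hook request" step; the handler address and port here are illustrative placeholders.

```go
// Sketch: a pod whose container fires an exec postStart hook at a helper
// pod; the test then verifies the helper saw the request.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container immediately after it starts.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Illustrative handler address, not the suite's.
							Command: []string{"sh", "-c", "wget -qO- http://10.0.0.1:8080/echo"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```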
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:34:48.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 11 08:34:48.682: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:34:55.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6903" for this suite.
Aug 11 08:35:01.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:35:01.131: INFO: namespace init-container-6903 deletion completed in 6.096481105s

• [SLOW TEST:12.540 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
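
Counterpart to the earlier init-container sketch: here the single init container exits non-zero, so with RestartPolicy=Never the kubelet marks the whole pod Failed and the app container is never started, which is exactly what this spec asserts. Names are again illustrative.

```go
// Sketch: a failing init container on a RestartNever pod; the pod ends
// Failed and "run1" never runs.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				// Exits 1 on its only run; the pod can never get past init.
				{Name: "init1", Image: "busybox", Command: []string{"/bin/false"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```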
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:35:01.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b in namespace container-probe-9265
Aug 11 08:35:05.223: INFO: Started pod liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b in namespace container-probe-9265
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 08:35:05.226: INFO: Initial restart count of pod liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b is 0
Aug 11 08:35:23.265: INFO: Restart count of pod container-probe-9265/liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b is now 1 (18.039265741s elapsed)
Aug 11 08:35:43.306: INFO: Restart count of pod container-probe-9265/liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b is now 2 (38.07946168s elapsed)
Aug 11 08:36:03.348: INFO: Restart count of pod container-probe-9265/liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b is now 3 (58.122269912s elapsed)
Aug 11 08:36:23.398: INFO: Restart count of pod container-probe-9265/liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b is now 4 (1m18.171841109s elapsed)
Aug 11 08:37:25.570: INFO: Restart count of pod container-probe-9265/liveness-b8c0f4c5-bf0d-449b-9373-009a33b2d01b is now 5 (2m20.344337556s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:37:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9265" for this suite.
Aug 11 08:37:31.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:37:31.727: INFO: namespace container-probe-9265 deletion completed in 6.105712467s

• [SLOW TEST:150.596 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
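
The usual way to drive a steadily climbing restartCount like the one logged above is a liveness probe that is healthy briefly and then fails forever; note how the gaps between restarts grow (roughly 20s apart at first, then 62s before restart 5) as the kubelet applies exponential back-off. A sketch under that assumption, using the canonical busybox liveness example rather than the suite's exact pod, with the v1.15-era corev1.Handler embedding:

```go
// Sketch: healthy for ~30s, then the probe file disappears and every
// probe fails, so the kubelet kills and restarts the container repeatedly.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				Args: []string{"/bin/sh", "-c",
					"touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    3, // 3 consecutive failures => restart
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```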
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:37:31.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-764e699b-573e-495b-8724-34b7c4a03b4d
STEP: Creating a pod to test consume secrets
Aug 11 08:37:31.815: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e" in namespace "projected-29" to be "success or failure"
Aug 11 08:37:31.880: INFO: Pod "pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e": Phase="Pending", Reason="", readiness=false. Elapsed: 65.353793ms
Aug 11 08:37:33.885: INFO: Pod "pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069851924s
Aug 11 08:37:35.889: INFO: Pod "pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074215658s
STEP: Saw pod success
Aug 11 08:37:35.889: INFO: Pod "pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e" satisfied condition "success or failure"
Aug 11 08:37:35.892: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 08:37:35.934: INFO: Waiting for pod pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e to disappear
Aug 11 08:37:36.006: INFO: Pod pod-projected-secrets-8c88a501-17f9-44f6-984c-0d4d1f6ec02e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:37:36.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-29" for this suite.
Aug 11 08:37:42.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:37:42.120: INFO: namespace projected-29 deletion completed in 6.110534693s

• [SLOW TEST:10.393 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
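
A sketch of the pod shape this spec checks: the secret is mounted through a projected volume with an explicit defaultMode (instead of the 0644 default), and the pod runs as a non-root UID with an fsGroup so the group ownership and permission bits of the mounted files can be verified. UID/GID and mode values below are illustrative.

```go
// Sketch: projected secret volume with defaultMode, consumed by a
// non-root pod with fsGroup set.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid := int64(1000), int64(2000)
	mode := int32(0440)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid, // non-root
				FSGroup:   &gid, // volume files get this group
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // 0440 instead of the 0644 default
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```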
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:37:42.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 11 08:37:42.221: INFO: Waiting up to 5m0s for pod "var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1" in namespace "var-expansion-8811" to be "success or failure"
Aug 11 08:37:42.225: INFO: Pod "var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117333ms
Aug 11 08:37:44.229: INFO: Pod "var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007517209s
Aug 11 08:37:46.233: INFO: Pod "var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011805775s
STEP: Saw pod success
Aug 11 08:37:46.233: INFO: Pod "var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1" satisfied condition "success or failure"
Aug 11 08:37:46.237: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1 container dapi-container: 
STEP: delete the pod
Aug 11 08:37:46.283: INFO: Waiting for pod var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1 to disappear
Aug 11 08:37:46.291: INFO: Pod var-expansion-0b5f723a-6e06-47ef-b667-241e0df4f7b1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:37:46.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8811" for this suite.
Aug 11 08:37:52.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:37:52.393: INFO: namespace var-expansion-8811 deletion completed in 6.098585886s

• [SLOW TEST:10.273 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
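
The substitution being tested: `$(VAR)` references in a container's args are expanded from that container's env before the process starts, and the test asserts the expanded string in the pod log. A sketch with an illustrative variable and command:

```go
// Sketch: Kubernetes rewrites $(TEST_VAR) in args to "test-value" before
// exec'ing the container command.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(TEST_VAR)"}, // expanded server-side
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```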
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:37:52.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-gn97
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 08:37:52.518: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gn97" in namespace "subpath-1944" to be "success or failure"
Aug 11 08:37:52.525: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Pending", Reason="", readiness=false. Elapsed: 7.297219ms
Aug 11 08:37:54.529: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011637563s
Aug 11 08:37:56.534: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 4.016071724s
Aug 11 08:37:58.538: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 6.020318802s
Aug 11 08:38:00.542: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 8.024375157s
Aug 11 08:38:02.546: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 10.02821699s
Aug 11 08:38:04.550: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 12.032303495s
Aug 11 08:38:06.554: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 14.036747029s
Aug 11 08:38:08.565: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 16.047418975s
Aug 11 08:38:10.569: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 18.051452859s
Aug 11 08:38:12.573: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 20.055509254s
Aug 11 08:38:14.578: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Running", Reason="", readiness=true. Elapsed: 22.059808013s
Aug 11 08:38:16.582: INFO: Pod "pod-subpath-test-downwardapi-gn97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063899724s
STEP: Saw pod success
Aug 11 08:38:16.582: INFO: Pod "pod-subpath-test-downwardapi-gn97" satisfied condition "success or failure"
Aug 11 08:38:16.584: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-gn97 container test-container-subpath-downwardapi-gn97: 
STEP: delete the pod
Aug 11 08:38:16.726: INFO: Waiting for pod pod-subpath-test-downwardapi-gn97 to disappear
Aug 11 08:38:16.844: INFO: Pod pod-subpath-test-downwardapi-gn97 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gn97
Aug 11 08:38:16.845: INFO: Deleting pod "pod-subpath-test-downwardapi-gn97" in namespace "subpath-1944"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:38:16.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1944" for this suite.
Aug 11 08:38:22.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:38:22.988: INFO: namespace subpath-1944 deletion completed in 6.136871354s

• [SLOW TEST:30.595 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
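
A sketch of the atomic-writer subpath arrangement behind this spec: a downward API volume exposes a single file, and the container mounts just that file via subPath, then reads it back while the pod stays Running for the test window. The exposed field and paths are illustrative.

```go
// Sketch: downward API volume file mounted through a subPath.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test/podname && sleep 20"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/test/podname",
					SubPath:   "podname", // mount one file out of the volume
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```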
SSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:38:22.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:38:23.036: INFO: Creating deployment "test-recreate-deployment"
Aug 11 08:38:23.046: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 11 08:38:23.072: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 11 08:38:25.087: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 11 08:38:25.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732731903, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732731903, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732731903, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732731903, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 08:38:27.094: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 11 08:38:27.102: INFO: Updating deployment test-recreate-deployment
Aug 11 08:38:27.102: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 11 08:38:27.328: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6312,SelfLink:/apis/apps/v1/namespaces/deployment-6312/deployments/test-recreate-deployment,UID:b6024cd7-abb3-4498-be61-136faeea0f62,ResourceVersion:4155452,Generation:2,CreationTimestamp:2020-08-11 08:38:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-11 08:38:27 +0000 UTC 2020-08-11 08:38:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-11 08:38:27 +0000 UTC 2020-08-11 08:38:23 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug 11 08:38:27.352: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6312,SelfLink:/apis/apps/v1/namespaces/deployment-6312/replicasets/test-recreate-deployment-5c8c9cc69d,UID:2cf8b100-0eb4-4fd2-8932-5a4a15fc7480,ResourceVersion:4155450,Generation:1,CreationTimestamp:2020-08-11 08:38:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b6024cd7-abb3-4498-be61-136faeea0f62 0xc002795937 0xc002795938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 08:38:27.352: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 11 08:38:27.352: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6312,SelfLink:/apis/apps/v1/namespaces/deployment-6312/replicasets/test-recreate-deployment-6df85df6b9,UID:50dcbc75-4f1a-4eda-b394-e131777b611c,ResourceVersion:4155441,Generation:2,CreationTimestamp:2020-08-11 08:38:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b6024cd7-abb3-4498-be61-136faeea0f62 0xc002795a07 0xc002795a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 08:38:27.356: INFO: Pod "test-recreate-deployment-5c8c9cc69d-ppncb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-ppncb,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6312,SelfLink:/api/v1/namespaces/deployment-6312/pods/test-recreate-deployment-5c8c9cc69d-ppncb,UID:57e1c686-ee23-4c34-a4e5-e539119cf20c,ResourceVersion:4155453,Generation:0,CreationTimestamp:2020-08-11 08:38:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 2cf8b100-0eb4-4fd2-8932-5a4a15fc7480 0xc00005b337 0xc00005b338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xvtxd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xvtxd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xvtxd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00005b680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00005b6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:38:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:38:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:38:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 08:38:27 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-11 08:38:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:38:27.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6312" for this suite.
Aug 11 08:38:33.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:38:33.778: INFO: namespace deployment-6312 deletion completed in 6.418950706s

• [SLOW TEST:10.790 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
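
A sketch of a Recreate-strategy Deployment like "test-recreate-deployment" above: on a template change the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, which is exactly the "new pods will not run with old pods" property the spec watches for. Labels and images mirror the object dump above; the replica count is illustrative.

```go
// Sketch: Deployment with strategy Recreate, so old and new pods never
// overlap during a rollout.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				// No RollingUpdate block: old pods are torn down first.
				Type: appsv1.RecreateDeploymentStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(b))
}
```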
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:38:33.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-mxgm
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 08:38:33.905: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mxgm" in namespace "subpath-8571" to be "success or failure"
Aug 11 08:38:33.918: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Pending", Reason="", readiness=false. Elapsed: 13.060627ms
Aug 11 08:38:35.922: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017047305s
Aug 11 08:38:37.926: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 4.021018087s
Aug 11 08:38:39.930: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 6.024975414s
Aug 11 08:38:41.934: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 8.028581767s
Aug 11 08:38:43.938: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 10.032641362s
Aug 11 08:38:45.942: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 12.036714714s
Aug 11 08:38:47.947: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 14.041636743s
Aug 11 08:38:49.951: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 16.045772578s
Aug 11 08:38:51.955: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 18.050241534s
Aug 11 08:38:53.960: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 20.054830667s
Aug 11 08:38:55.963: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Running", Reason="", readiness=true. Elapsed: 22.058410683s
Aug 11 08:38:57.975: INFO: Pod "pod-subpath-test-projected-mxgm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070128479s
STEP: Saw pod success
Aug 11 08:38:57.975: INFO: Pod "pod-subpath-test-projected-mxgm" satisfied condition "success or failure"
Aug 11 08:38:57.978: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-mxgm container test-container-subpath-projected-mxgm: 
STEP: delete the pod
Aug 11 08:38:58.006: INFO: Waiting for pod pod-subpath-test-projected-mxgm to disappear
Aug 11 08:38:58.079: INFO: Pod pod-subpath-test-projected-mxgm no longer exists
STEP: Deleting pod pod-subpath-test-projected-mxgm
Aug 11 08:38:58.079: INFO: Deleting pod "pod-subpath-test-projected-mxgm" in namespace "subpath-8571"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:38:58.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8571" for this suite.
Aug 11 08:39:04.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:39:04.187: INFO: namespace subpath-8571 deletion completed in 6.100823105s

• [SLOW TEST:30.408 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
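
Same subpath pattern as the downward API sketch earlier, but with a projected volume as the atomic writer; only the volume source changes. Secret name, key, and paths are illustrative.

```go
// Sketch: projected (secret) volume consumed through a subPath mount.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test/key && sleep 20"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected", MountPath: "/test/key", SubPath: "key",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```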
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:39:04.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-5c72c3ca-447e-402c-b116-c2e653dc9e76 in namespace container-probe-3297
Aug 11 08:39:08.292: INFO: Started pod test-webserver-5c72c3ca-447e-402c-b116-c2e653dc9e76 in namespace container-probe-3297
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 08:39:08.295: INFO: Initial restart count of pod test-webserver-5c72c3ca-447e-402c-b116-c2e653dc9e76 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:43:09.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3297" for this suite.
Aug 11 08:43:15.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:43:15.142: INFO: namespace container-probe-3297 deletion completed in 6.111931182s

• [SLOW TEST:250.955 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
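
This is the healthy counterpart of the earlier liveness spec: an HTTP liveness probe that keeps returning 200, so across the roughly four-minute observation window in the log (08:39:08 to 08:43:09) the restart count stays 0. In the sketch below, nginx probed on "/" stands in for the suite's test-webserver serving /healthz; the Handler embedding is again the v1.15-era type.

```go
// Sketch: an always-passing HTTP liveness probe; restartCount stays 0.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx:1.14-alpine",
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/", // stand-in for the suite's /healthz
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```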
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:43:15.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug 11 08:43:15.195: INFO: Waiting up to 5m0s for pod "client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6" in namespace "containers-7223" to be "success or failure"
Aug 11 08:43:15.224: INFO: Pod "client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.964756ms
Aug 11 08:43:17.228: INFO: Pod "client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033128838s
Aug 11 08:43:19.232: INFO: Pod "client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037469573s
STEP: Saw pod success
Aug 11 08:43:19.232: INFO: Pod "client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6" satisfied condition "success or failure"
Aug 11 08:43:19.235: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6 container test-container: 
STEP: delete the pod
Aug 11 08:43:19.324: INFO: Waiting for pod client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6 to disappear
Aug 11 08:43:19.430: INFO: Pod client-containers-ca6cd95f-23a0-44c7-a1c4-dc63d480a6e6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:43:19.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7223" for this suite.
Aug 11 08:43:25.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:43:25.527: INFO: namespace containers-7223 deletion completed in 6.091990449s

• [SLOW TEST:10.384 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
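
The "use defaults" case is simply a container with neither Command nor Args set, so the image's own ENTRYPOINT and CMD run unchanged and the test checks the resulting output. A sketch; the image name is illustrative:

```go
// Sketch: no Command/Args, so the image's ENTRYPOINT and CMD apply as-is.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0",
				// Command and Args deliberately omitted.
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```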
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:43:25.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-707d3282-c288-4f66-8cb3-6d5820745ddd
STEP: Creating secret with name s-test-opt-upd-cdaf0343-5822-45fb-8fbb-0e96eb54e91d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-707d3282-c288-4f66-8cb3-6d5820745ddd
STEP: Updating secret s-test-opt-upd-cdaf0343-5822-45fb-8fbb-0e96eb54e91d
STEP: Creating secret with name s-test-opt-create-285e1cd9-5629-442b-a376-28fdc4c0e312
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:43:33.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7916" for this suite.
Aug 11 08:43:55.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:43:55.867: INFO: namespace projected-7916 deletion completed in 22.093911704s

• [SLOW TEST:30.340 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
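
A sketch of the optional-secret projection this spec mutates: each projected source is marked Optional, so the volume tolerates the "del" secret disappearing and picks up the "upd" and "create" changes, which the pod then observes on disk. The secret names below are shortened stand-ins for the generated ones in the log.

```go
// Sketch: three optional secret projections in one volume; the test
// deletes one secret, updates another, and creates a third.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	proj := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional, // a missing secret is not fatal
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secrets",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							proj("s-test-opt-del"),
							proj("s-test-opt-upd"),
							proj("s-test-opt-create"),
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/secrets; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secrets", MountPath: "/etc/secrets",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```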
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:43:55.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 11 08:44:00.525: INFO: Successfully updated pod "annotationupdate8b39de44-d178-4997-a5af-b071f149b1f8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:44:04.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-432" for this suite.
Aug 11 08:44:26.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:44:26.678: INFO: namespace projected-432 deletion completed in 22.094795927s

• [SLOW TEST:30.811 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
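
A sketch of the projected downward API volume behind this spec: the pod's annotations are exposed as a file, and when the test updates an annotation the kubelet rewrites the file, which the container detects. The annotation key/value and paths are illustrative.

```go
// Sketch: pod annotations surfaced as a file via a projected downward
// API volume; updates to the annotation propagate into the file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"}, // later patched
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```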
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:44:26.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 11 08:44:26.759: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 11 08:44:27.598: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 11 08:44:29.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 08:44:31.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732732267, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 08:44:34.441: INFO: Waited 621.96571ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:44:34.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-377" for this suite.
Aug 11 08:44:41.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:44:41.255: INFO: namespace aggregator-377 deletion completed in 6.301460047s

• [SLOW TEST:14.577 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
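
The "Registering the sample API server" step boils down to creating an APIService object that tells the aggregation layer to proxy one group/version to an in-cluster Service backed by the sample apiserver Deployment seen in the status dumps above. A sketch of that registration; the group, version, and service names are hypothetical stand-ins, not the suite's exact values.

```go
// Sketch: registering an aggregated API via apiregistration.k8s.io/v1.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	apiService := &apiregv1.APIService{
		// The object name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.example.com", // hypothetical group
			Version: "v1alpha1",
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-377",
				Name:      "sample-api", // hypothetical service name
			},
			InsecureSkipTLSVerify: true, // a real setup pins CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	b, _ := json.MarshalIndent(apiService, "", "  ")
	fmt.Println(string(b))
}
```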
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:44:41.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0811 08:44:52.991120       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 08:44:52.991: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:44:52.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4042" for this suite.
Aug 11 08:45:03.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:45:03.128: INFO: namespace gc-4042 deletion completed in 10.133508203s

• [SLOW TEST:21.872 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
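
A sketch of the ownership arrangement the GC spec builds: half of the pods carry two ownerReferences, one per ReplicationController. Deleting simpletest-rc-to-be-deleted in the foreground (the "owner that's waiting for dependents to be deleted") must not cascade to those pods, because simpletest-rc-to-stay remains a valid owner. The UIDs below are placeholders for values read back from the API.

```go
// Sketch: a pod with two RC owners, plus the foreground delete options
// used on one of them.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-be-deleted", UID: types.UID("uid-1")},
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-stay", UID: types.UID("uid-2")},
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.14-alpine"}},
		},
	}

	// Foreground deletion of one owner: the GC only removes dependents
	// whose every remaining owner is being (or has been) deleted.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
	fmt.Printf("delete the first rc with: %+v\n", opts)
}
```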
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:45:03.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node-default medium
Aug 11 08:45:03.193: INFO: Waiting up to 5m0s for pod "pod-9abceec0-ad1e-486f-8bd2-72911485468c" in namespace "emptydir-85" to be "success or failure"
Aug 11 08:45:03.196: INFO: Pod "pod-9abceec0-ad1e-486f-8bd2-72911485468c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.585839ms
Aug 11 08:45:05.200: INFO: Pod "pod-9abceec0-ad1e-486f-8bd2-72911485468c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007291506s
Aug 11 08:45:07.204: INFO: Pod "pod-9abceec0-ad1e-486f-8bd2-72911485468c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011280417s
STEP: Saw pod success
Aug 11 08:45:07.204: INFO: Pod "pod-9abceec0-ad1e-486f-8bd2-72911485468c" satisfied condition "success or failure"
Aug 11 08:45:07.207: INFO: Trying to get logs from node iruya-worker pod pod-9abceec0-ad1e-486f-8bd2-72911485468c container test-container: 
STEP: delete the pod
Aug 11 08:45:07.263: INFO: Waiting for pod pod-9abceec0-ad1e-486f-8bd2-72911485468c to disappear
Aug 11 08:45:07.268: INFO: Pod pod-9abceec0-ad1e-486f-8bd2-72911485468c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:45:07.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-85" for this suite.
Aug 11 08:45:13.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:45:13.352: INFO: namespace emptydir-85 deletion completed in 6.080631949s

• [SLOW TEST:10.224 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
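
The pod this test creates reduces to an emptyDir volume with Medium left empty (node-default storage) and a one-shot container that reports the mount point's mode. A sketch of that spec; the busybox image and ls command stand in for the suite's actual test image:

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // defaultMediumPod sketches the test pod: an emptyDir with Medium unset
  // mounted read-write, and a container that prints the mount's mode.
  func defaultMediumPod(ns string) *corev1.Pod {
    return &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-check", Namespace: ns},
      Spec: corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Volumes: []corev1.Volume{{
          Name:         "test-volume",
          VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
        }},
        Containers: []corev1.Container{{
          Name:         "test-container",
          Image:        "busybox",
          Command:      []string{"sh", "-c", "ls -ld /test-volume"}, // expect a world-writable (0777) directory on the default medium
          VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
        }},
      },
    }
  }
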
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:45:13.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:45:18.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6310" for this suite.
Aug 11 08:45:40.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:45:40.596: INFO: namespace replication-controller-6310 deletion completed in 22.124482008s

• [SLOW TEST:27.243 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
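
The Given/When/Then above can be reproduced directly: create a bare pod carrying a name label, then an RC whose selector matches it; the replication manager adopts the orphan by stamping a controller owner reference instead of creating a new pod. A sketch with v1.15-era client-go (image illustrative):

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
  )

  // adoptionSetup creates an orphan pod, then an RC whose selector matches
  // it. Afterwards the pod's OwnerReferences should name the RC with
  // Controller=true instead of a second pod appearing.
  func adoptionSetup(cs kubernetes.Interface, ns string) error {
    labels := map[string]string{"name": "pod-adoption"}
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
      Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "nginx"}}},
    }
    if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
      return err
    }
    one := int32(1)
    rc := &corev1.ReplicationController{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
      Spec: corev1.ReplicationControllerSpec{
        Replicas: &one,
        Selector: labels,
        Template: &corev1.PodTemplateSpec{
          ObjectMeta: metav1.ObjectMeta{Labels: labels},
          Spec:       pod.Spec,
        },
      },
    }
    _, err := cs.CoreV1().ReplicationControllers(ns).Create(rc)
    return err
  }
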
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:45:40.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-373
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 11 08:45:40.637: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 11 08:46:06.841: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.136:8080/dial?request=hostName&protocol=udp&host=10.244.1.135&port=8081&tries=1'] Namespace:pod-network-test-373 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 08:46:06.841: INFO: >>> kubeConfig: /root/.kube/config
I0811 08:46:06.880970       6 log.go:172] (0xc0009ef550) (0xc002bfd040) Create stream
I0811 08:46:06.881004       6 log.go:172] (0xc0009ef550) (0xc002bfd040) Stream added, broadcasting: 1
I0811 08:46:06.883287       6 log.go:172] (0xc0009ef550) Reply frame received for 1
I0811 08:46:06.883335       6 log.go:172] (0xc0009ef550) (0xc002bfd180) Create stream
I0811 08:46:06.883355       6 log.go:172] (0xc0009ef550) (0xc002bfd180) Stream added, broadcasting: 3
I0811 08:46:06.884287       6 log.go:172] (0xc0009ef550) Reply frame received for 3
I0811 08:46:06.884330       6 log.go:172] (0xc0009ef550) (0xc002bfd2c0) Create stream
I0811 08:46:06.884345       6 log.go:172] (0xc0009ef550) (0xc002bfd2c0) Stream added, broadcasting: 5
I0811 08:46:06.885091       6 log.go:172] (0xc0009ef550) Reply frame received for 5
I0811 08:46:06.952428       6 log.go:172] (0xc0009ef550) Data frame received for 3
I0811 08:46:06.952456       6 log.go:172] (0xc002bfd180) (3) Data frame handling
I0811 08:46:06.952475       6 log.go:172] (0xc002bfd180) (3) Data frame sent
I0811 08:46:06.953114       6 log.go:172] (0xc0009ef550) Data frame received for 5
I0811 08:46:06.953129       6 log.go:172] (0xc002bfd2c0) (5) Data frame handling
I0811 08:46:06.953403       6 log.go:172] (0xc0009ef550) Data frame received for 3
I0811 08:46:06.953419       6 log.go:172] (0xc002bfd180) (3) Data frame handling
I0811 08:46:06.955148       6 log.go:172] (0xc0009ef550) Data frame received for 1
I0811 08:46:06.955165       6 log.go:172] (0xc002bfd040) (1) Data frame handling
I0811 08:46:06.955179       6 log.go:172] (0xc002bfd040) (1) Data frame sent
I0811 08:46:06.955221       6 log.go:172] (0xc0009ef550) (0xc002bfd040) Stream removed, broadcasting: 1
I0811 08:46:06.955304       6 log.go:172] (0xc0009ef550) (0xc002bfd040) Stream removed, broadcasting: 1
I0811 08:46:06.955322       6 log.go:172] (0xc0009ef550) (0xc002bfd180) Stream removed, broadcasting: 3
I0811 08:46:06.955422       6 log.go:172] (0xc0009ef550) Go away received
I0811 08:46:06.955530       6 log.go:172] (0xc0009ef550) (0xc002bfd2c0) Stream removed, broadcasting: 5
Aug 11 08:46:06.955: INFO: Waiting for endpoints: map[]
Aug 11 08:46:06.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.136:8080/dial?request=hostName&protocol=udp&host=10.244.2.54&port=8081&tries=1'] Namespace:pod-network-test-373 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 08:46:06.958: INFO: >>> kubeConfig: /root/.kube/config
I0811 08:46:06.985451       6 log.go:172] (0xc000f54580) (0xc00315c8c0) Create stream
I0811 08:46:06.985483       6 log.go:172] (0xc000f54580) (0xc00315c8c0) Stream added, broadcasting: 1
I0811 08:46:06.989188       6 log.go:172] (0xc000f54580) Reply frame received for 1
I0811 08:46:06.989241       6 log.go:172] (0xc000f54580) (0xc002bfd360) Create stream
I0811 08:46:06.989262       6 log.go:172] (0xc000f54580) (0xc002bfd360) Stream added, broadcasting: 3
I0811 08:46:06.990732       6 log.go:172] (0xc000f54580) Reply frame received for 3
I0811 08:46:06.990790       6 log.go:172] (0xc000f54580) (0xc00315c960) Create stream
I0811 08:46:06.990810       6 log.go:172] (0xc000f54580) (0xc00315c960) Stream added, broadcasting: 5
I0811 08:46:06.992250       6 log.go:172] (0xc000f54580) Reply frame received for 5
I0811 08:46:07.072000       6 log.go:172] (0xc000f54580) Data frame received for 3
I0811 08:46:07.072033       6 log.go:172] (0xc002bfd360) (3) Data frame handling
I0811 08:46:07.072054       6 log.go:172] (0xc002bfd360) (3) Data frame sent
I0811 08:46:07.073303       6 log.go:172] (0xc000f54580) Data frame received for 3
I0811 08:46:07.073341       6 log.go:172] (0xc002bfd360) (3) Data frame handling
I0811 08:46:07.073364       6 log.go:172] (0xc000f54580) Data frame received for 5
I0811 08:46:07.073373       6 log.go:172] (0xc00315c960) (5) Data frame handling
I0811 08:46:07.074997       6 log.go:172] (0xc000f54580) Data frame received for 1
I0811 08:46:07.075017       6 log.go:172] (0xc00315c8c0) (1) Data frame handling
I0811 08:46:07.075032       6 log.go:172] (0xc00315c8c0) (1) Data frame sent
I0811 08:46:07.075046       6 log.go:172] (0xc000f54580) (0xc00315c8c0) Stream removed, broadcasting: 1
I0811 08:46:07.075129       6 log.go:172] (0xc000f54580) Go away received
I0811 08:46:07.075197       6 log.go:172] (0xc000f54580) (0xc00315c8c0) Stream removed, broadcasting: 1
I0811 08:46:07.075239       6 log.go:172] (0xc000f54580) (0xc002bfd360) Stream removed, broadcasting: 3
I0811 08:46:07.075267       6 log.go:172] (0xc000f54580) (0xc00315c960) Stream removed, broadcasting: 5
Aug 11 08:46:07.075: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:46:07.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-373" for this suite.
Aug 11 08:46:29.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:46:29.232: INFO: namespace pod-network-test-373 deletion completed in 22.150806879s

• [SLOW TEST:48.636 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
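
Each ExecWithOptions above shells into the host-network helper pod and curls the test container's /dial endpoint, which relays a hostName probe over UDP to the target pod on port 8081; the trailing "Waiting for endpoints: map[]" means no expected endpoint remains unseen. A stdlib sketch of the same probe, using the address pattern from the log (these pod IPs are only reachable from inside the cluster):

  package sketch

  import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/url"
  )

  // dialCheck asks the test container's /dial helper to send a UDP
  // "hostName" request to targetIP:8081 and returns the raw JSON reply
  // listing which hostnames answered.
  func dialCheck(testContainerIP, targetIP string) (string, error) {
    u := fmt.Sprintf("http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
      testContainerIP, url.QueryEscape(targetIP))
    resp, err := http.Get(u)
    if err != nil {
      return "", err
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    return string(body), err
  }
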
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:46:29.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7540/configmap-test-b0300f6c-287e-46b9-bec5-ee0c7b8a1590
STEP: Creating a pod to test consume configMaps
Aug 11 08:46:29.349: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851" in namespace "configmap-7540" to be "success or failure"
Aug 11 08:46:29.351: INFO: Pod "pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316109ms
Aug 11 08:46:31.355: INFO: Pod "pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006456469s
Aug 11 08:46:33.367: INFO: Pod "pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017914325s
STEP: Saw pod success
Aug 11 08:46:33.367: INFO: Pod "pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851" satisfied condition "success or failure"
Aug 11 08:46:33.370: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851 container env-test: 
STEP: delete the pod
Aug 11 08:46:33.404: INFO: Waiting for pod pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851 to disappear
Aug 11 08:46:33.414: INFO: Pod pod-configmaps-ebdc3d48-e490-4f60-98f8-41fd9d7ee851 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:46:33.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7540" for this suite.
Aug 11 08:46:39.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:46:39.508: INFO: namespace configmap-7540 deletion completed in 6.090374956s

• [SLOW TEST:10.276 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
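
Consuming a ConfigMap "via the environment" means wiring a key through an EnvVarSource rather than a volume. A sketch of the pod shape this test builds; the key name and command are illustrative, and the generated names in the log differ:

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // configMapEnvPod surfaces one ConfigMap key as an environment variable
  // in a one-shot container that echoes it for the log check.
  func configMapEnvPod(ns, cmName string) *corev1.Pod {
    return &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env", Namespace: ns},
      Spec: corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Containers: []corev1.Container{{
          Name:    "env-test",
          Image:   "busybox",
          Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
          Env: []corev1.EnvVar{{
            Name: "CONFIG_DATA_1",
            ValueFrom: &corev1.EnvVarSource{
              ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                Key:                  "data-1",
              },
            },
          }},
        }},
      },
    }
  }
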
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:46:39.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 11 08:46:39.655: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1005,SelfLink:/api/v1/namespaces/watch-1005/configmaps/e2e-watch-test-label-changed,UID:f94097eb-355d-49b0-a939-b0776bfe00df,ResourceVersion:4157001,Generation:0,CreationTimestamp:2020-08-11 08:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 11 08:46:39.655: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1005,SelfLink:/api/v1/namespaces/watch-1005/configmaps/e2e-watch-test-label-changed,UID:f94097eb-355d-49b0-a939-b0776bfe00df,ResourceVersion:4157002,Generation:0,CreationTimestamp:2020-08-11 08:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 11 08:46:39.655: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1005,SelfLink:/api/v1/namespaces/watch-1005/configmaps/e2e-watch-test-label-changed,UID:f94097eb-355d-49b0-a939-b0776bfe00df,ResourceVersion:4157003,Generation:0,CreationTimestamp:2020-08-11 08:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 11 08:46:49.686: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1005,SelfLink:/api/v1/namespaces/watch-1005/configmaps/e2e-watch-test-label-changed,UID:f94097eb-355d-49b0-a939-b0776bfe00df,ResourceVersion:4157025,Generation:0,CreationTimestamp:2020-08-11 08:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 11 08:46:49.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1005,SelfLink:/api/v1/namespaces/watch-1005/configmaps/e2e-watch-test-label-changed,UID:f94097eb-355d-49b0-a939-b0776bfe00df,ResourceVersion:4157026,Generation:0,CreationTimestamp:2020-08-11 08:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 11 08:46:49.686: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1005,SelfLink:/api/v1/namespaces/watch-1005/configmaps/e2e-watch-test-label-changed,UID:f94097eb-355d-49b0-a939-b0776bfe00df,ResourceVersion:4157027,Generation:0,CreationTimestamp:2020-08-11 08:46:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:46:49.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1005" for this suite.
Aug 11 08:46:55.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:46:55.775: INFO: namespace watch-1005 deletion completed in 6.084553124s

• [SLOW TEST:16.266 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
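
The ADDED/MODIFIED/DELETED triple is delivered twice above because the watch is label-selected: changing the label away from the selector surfaces as DELETED, and restoring it surfaces as ADDED carrying the mutations made in between, followed by the real deletion. A sketch of opening the same kind of watch with v1.15-era client-go:

  package sketch

  import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
  )

  // watchLabelled opens a label-selected watch on configmaps in ns and
  // prints each event type as it arrives, like the "Got : ..." lines above.
  func watchLabelled(cs kubernetes.Interface, ns string) error {
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
      LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    if err != nil {
      return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
      fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
    }
    return nil
  }
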
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:46:55.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 08:46:55.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14" in namespace "projected-1440" to be "success or failure"
Aug 11 08:46:55.915: INFO: Pod "downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14": Phase="Pending", Reason="", readiness=false. Elapsed: 24.16709ms
Aug 11 08:46:57.919: INFO: Pod "downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028423684s
Aug 11 08:46:59.924: INFO: Pod "downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033272047s
STEP: Saw pod success
Aug 11 08:46:59.924: INFO: Pod "downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14" satisfied condition "success or failure"
Aug 11 08:46:59.927: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14 container client-container: 
STEP: delete the pod
Aug 11 08:46:59.980: INFO: Waiting for pod downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14 to disappear
Aug 11 08:46:59.986: INFO: Pod downwardapi-volume-b1bee89d-7357-4392-936d-42d6b621ba14 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:46:59.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1440" for this suite.
Aug 11 08:47:06.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:47:06.078: INFO: namespace projected-1440 deletion completed in 6.088047356s

• [SLOW TEST:10.303 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
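
Setting a mode "on item file" means the per-file Mode field of a projected downward API item, which overrides the volume's DefaultMode for that one path. A sketch of the volume shape; 0400 is an illustrative value, since the test's exact mode is not shown in the log:

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
  )

  // projectedItemWithMode builds a projected volume whose single downward
  // API item carries an explicit per-file mode.
  func projectedItemWithMode() corev1.Volume {
    mode := int32(0400)
    return corev1.Volume{
      Name: "podinfo",
      VolumeSource: corev1.VolumeSource{
        Projected: &corev1.ProjectedVolumeSource{
          Sources: []corev1.VolumeProjection{{
            DownwardAPI: &corev1.DownwardAPIProjection{
              Items: []corev1.DownwardAPIVolumeFile{{
                Path: "podname",
                FieldRef: &corev1.ObjectFieldSelector{
                  APIVersion: "v1",
                  FieldPath:  "metadata.name",
                },
                Mode: &mode, // per-item mode overrides the volume's DefaultMode
              }},
            },
          }},
        },
      },
    }
  }
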
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:47:06.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 11 08:47:06.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1435'
Aug 11 08:47:09.156: INFO: stderr: ""
Aug 11 08:47:09.156: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 08:47:09.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:09.353: INFO: stderr: ""
Aug 11 08:47:09.353: INFO: stdout: "update-demo-nautilus-gb8p9 update-demo-nautilus-zgspk "
Aug 11 08:47:09.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb8p9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:09.483: INFO: stderr: ""
Aug 11 08:47:09.483: INFO: stdout: ""
Aug 11 08:47:09.483: INFO: update-demo-nautilus-gb8p9 is created but not running
Aug 11 08:47:14.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:14.583: INFO: stderr: ""
Aug 11 08:47:14.583: INFO: stdout: "update-demo-nautilus-gb8p9 update-demo-nautilus-zgspk "
Aug 11 08:47:14.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb8p9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:14.681: INFO: stderr: ""
Aug 11 08:47:14.681: INFO: stdout: "true"
Aug 11 08:47:14.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gb8p9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:14.779: INFO: stderr: ""
Aug 11 08:47:14.779: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:47:14.779: INFO: validating pod update-demo-nautilus-gb8p9
Aug 11 08:47:14.783: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:47:14.783: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:47:14.783: INFO: update-demo-nautilus-gb8p9 is verified up and running
Aug 11 08:47:14.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgspk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:14.869: INFO: stderr: ""
Aug 11 08:47:14.869: INFO: stdout: "true"
Aug 11 08:47:14.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgspk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:14.962: INFO: stderr: ""
Aug 11 08:47:14.962: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:47:14.962: INFO: validating pod update-demo-nautilus-zgspk
Aug 11 08:47:14.966: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:47:14.966: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:47:14.966: INFO: update-demo-nautilus-zgspk is verified up and running
STEP: scaling down the replication controller
Aug 11 08:47:14.969: INFO: scanned /root for discovery docs: 
Aug 11 08:47:14.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1435'
Aug 11 08:47:16.105: INFO: stderr: ""
Aug 11 08:47:16.105: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 08:47:16.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:16.208: INFO: stderr: ""
Aug 11 08:47:16.208: INFO: stdout: "update-demo-nautilus-gb8p9 update-demo-nautilus-zgspk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 11 08:47:21.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:21.310: INFO: stderr: ""
Aug 11 08:47:21.310: INFO: stdout: "update-demo-nautilus-gb8p9 update-demo-nautilus-zgspk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 11 08:47:26.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:26.408: INFO: stderr: ""
Aug 11 08:47:26.408: INFO: stdout: "update-demo-nautilus-zgspk "
Aug 11 08:47:26.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgspk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:26.510: INFO: stderr: ""
Aug 11 08:47:26.510: INFO: stdout: "true"
Aug 11 08:47:26.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgspk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:26.596: INFO: stderr: ""
Aug 11 08:47:26.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:47:26.596: INFO: validating pod update-demo-nautilus-zgspk
Aug 11 08:47:26.599: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:47:26.599: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:47:26.599: INFO: update-demo-nautilus-zgspk is verified up and running
STEP: scaling up the replication controller
Aug 11 08:47:26.602: INFO: scanned /root for discovery docs: 
Aug 11 08:47:26.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1435'
Aug 11 08:47:27.774: INFO: stderr: ""
Aug 11 08:47:27.774: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 11 08:47:27.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:27.878: INFO: stderr: ""
Aug 11 08:47:27.878: INFO: stdout: "update-demo-nautilus-kss5m update-demo-nautilus-zgspk "
Aug 11 08:47:27.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kss5m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:27.977: INFO: stderr: ""
Aug 11 08:47:27.978: INFO: stdout: ""
Aug 11 08:47:27.978: INFO: update-demo-nautilus-kss5m is created but not running
Aug 11 08:47:32.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1435'
Aug 11 08:47:33.093: INFO: stderr: ""
Aug 11 08:47:33.093: INFO: stdout: "update-demo-nautilus-kss5m update-demo-nautilus-zgspk "
Aug 11 08:47:33.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kss5m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:33.181: INFO: stderr: ""
Aug 11 08:47:33.181: INFO: stdout: "true"
Aug 11 08:47:33.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kss5m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:33.274: INFO: stderr: ""
Aug 11 08:47:33.274: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:47:33.274: INFO: validating pod update-demo-nautilus-kss5m
Aug 11 08:47:33.279: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:47:33.279: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:47:33.279: INFO: update-demo-nautilus-kss5m is verified up and running
Aug 11 08:47:33.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgspk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:33.378: INFO: stderr: ""
Aug 11 08:47:33.379: INFO: stdout: "true"
Aug 11 08:47:33.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zgspk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1435'
Aug 11 08:47:33.467: INFO: stderr: ""
Aug 11 08:47:33.467: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 11 08:47:33.467: INFO: validating pod update-demo-nautilus-zgspk
Aug 11 08:47:33.470: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 11 08:47:33.470: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug 11 08:47:33.470: INFO: update-demo-nautilus-zgspk is verified up and running
STEP: using delete to clean up resources
Aug 11 08:47:33.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1435'
Aug 11 08:47:33.562: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:47:33.563: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 11 08:47:33.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1435'
Aug 11 08:47:33.659: INFO: stderr: "No resources found.\n"
Aug 11 08:47:33.659: INFO: stdout: ""
Aug 11 08:47:33.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1435 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 08:47:33.807: INFO: stderr: ""
Aug 11 08:47:33.807: INFO: stdout: "update-demo-nautilus-kss5m\nupdate-demo-nautilus-zgspk\n"
Aug 11 08:47:34.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1435'
Aug 11 08:47:34.410: INFO: stderr: "No resources found.\n"
Aug 11 08:47:34.410: INFO: stdout: ""
Aug 11 08:47:34.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1435 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 08:47:34.508: INFO: stderr: ""
Aug 11 08:47:34.508: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:47:34.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1435" for this suite.
Aug 11 08:47:56.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:47:56.862: INFO: namespace kubectl-1435 deletion completed in 22.350592925s

• [SLOW TEST:50.782 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
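
The scale-down/scale-up loop above is plain kubectl driven from Go: issue kubectl scale, then poll the label-selected pod names every five seconds until the count matches the requested replicas. A stdlib sketch of the same loop (jsonpath output here instead of the log's go-template, and no overall timeout, for brevity):

  package sketch

  import (
    "fmt"
    "os/exec"
    "strings"
    "time"
  )

  // scaleAndWait scales the update-demo RC and polls until the number of
  // label-selected pod names equals the requested replica count.
  func scaleAndWait(ns string, replicas int) error {
    kubectl := "/usr/local/bin/kubectl"
    if out, err := exec.Command(kubectl, "--kubeconfig=/root/.kube/config",
      "scale", "rc", "update-demo-nautilus",
      fmt.Sprintf("--replicas=%d", replicas), "--timeout=5m",
      "--namespace="+ns).CombinedOutput(); err != nil {
      return fmt.Errorf("scale: %v: %s", err, out)
    }
    for {
      out, err := exec.Command(kubectl, "--kubeconfig=/root/.kube/config",
        "get", "pods", "-l", "name=update-demo", "--namespace="+ns,
        "-o", "jsonpath={.items[*].metadata.name}").Output()
      if err != nil {
        return err
      }
      if len(strings.Fields(string(out))) == replicas {
        return nil
      }
      time.Sleep(5 * time.Second)
    }
  }
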
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:47:56.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 08:47:56.985: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4" in namespace "projected-1197" to be "success or failure"
Aug 11 08:47:57.069: INFO: Pod "downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 83.614155ms
Aug 11 08:47:59.195: INFO: Pod "downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209574695s
Aug 11 08:48:01.199: INFO: Pod "downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.213856882s
STEP: Saw pod success
Aug 11 08:48:01.199: INFO: Pod "downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4" satisfied condition "success or failure"
Aug 11 08:48:01.202: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4 container client-container: 
STEP: delete the pod
Aug 11 08:48:01.235: INFO: Waiting for pod downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4 to disappear
Aug 11 08:48:01.302: INFO: Pod downwardapi-volume-a45dcd11-d00c-4a41-8e17-4c21312f2bb4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:48:01.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1197" for this suite.
Aug 11 08:48:07.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:48:07.393: INFO: namespace projected-1197 deletion completed in 6.086975442s

• [SLOW TEST:10.531 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
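
When a container declares no memory limit, a downward API resourceFieldRef for limits.memory does not fail; it falls back to the node's allocatable memory, and that number is what gets written into the volume file. The item under test reduces to roughly:

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
  )

  // memLimitItem asks the downward API for the container's memory limit;
  // with no limit set on the container, the node's allocatable memory is
  // reported instead (the default divisor of "1" yields plain bytes).
  func memLimitItem() corev1.DownwardAPIVolumeFile {
    return corev1.DownwardAPIVolumeFile{
      Path: "memory_limit",
      ResourceFieldRef: &corev1.ResourceFieldSelector{
        ContainerName: "client-container", // container name as in the log
        Resource:      "limits.memory",
      },
    }
  }
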
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:48:07.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:49:07.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4682" for this suite.
Aug 11 08:49:29.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:49:29.557: INFO: namespace container-probe-4682 deletion completed in 22.088606749s

• [SLOW TEST:82.163 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
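
This test logs no steps because the whole minute is spent watching status: a readiness probe that always fails must leave the container with Ready=false and RestartCount=0, since only liveness failures trigger restarts. A sketch of the pod shape, using the v1.15-era embedded Handler field (later releases renamed it ProbeHandler); image and commands are illustrative:

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // neverReadyPod runs a long-lived container whose readiness probe always
  // fails; the expected steady state is Ready=false with RestartCount=0.
  func neverReadyPod(ns string) *corev1.Pod {
    return &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "test-webserver", Namespace: ns},
      Spec: corev1.PodSpec{
        Containers: []corev1.Container{{
          Name:    "test-webserver",
          Image:   "busybox",
          Command: []string{"sleep", "3600"},
          ReadinessProbe: &corev1.Probe{
            Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
            InitialDelaySeconds: 5,
            PeriodSeconds:       5,
          },
        }},
      },
    }
  }
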
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:49:29.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 11 08:49:33.710: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:49:33.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2144" for this suite.
Aug 11 08:49:39.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:49:39.846: INFO: namespace container-runtime-2144 deletion completed in 6.091263289s

• [SLOW TEST:10.289 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
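
FallbackToLogsOnError consults the container logs only when the container fails; here the container succeeds while writing neither a termination-log file nor any output, so the reported message stays empty, which is what the "Expected: &{} to match" line asserts. The container shape reduces to roughly (image illustrative):

  package sketch

  import (
    corev1 "k8s.io/api/core/v1"
  )

  // fallbackPolicyContainer exits 0 without writing /dev/termination-log
  // and without producing output; with FallbackToLogsOnError the
  // termination message is therefore empty.
  func fallbackPolicyContainer() corev1.Container {
    return corev1.Container{
      Name:                     "termination-message-container",
      Image:                    "busybox",
      Command:                  []string{"true"}, // succeed silently
      TerminationMessagePath:   "/dev/termination-log",
      TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
  }
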
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:49:39.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 11 08:49:39.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4291'
Aug 11 08:49:39.994: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 11 08:49:39.994: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 11 08:49:40.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4291'
Aug 11 08:49:40.218: INFO: stderr: ""
Aug 11 08:49:40.218: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:49:40.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4291" for this suite.
Aug 11 08:49:46.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:49:46.333: INFO: namespace kubectl-4291 deletion completed in 6.111196175s

• [SLOW TEST:6.486 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
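
The deprecated --generator=job/v1 invocation above expands to an ordinary batch/v1 Job whose pod template uses RestartPolicy OnFailure (Jobs only admit OnFailure or Never; the pod default of Always is rejected). A sketch of the equivalent object:

  package sketch

  import (
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // onFailureJob is roughly what `kubectl run --restart=OnFailure
  // --generator=job/v1` created in the log.
  func onFailureJob(ns string) *batchv1.Job {
    return &batchv1.Job{
      ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job", Namespace: ns},
      Spec: batchv1.JobSpec{
        Template: corev1.PodTemplateSpec{
          Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyOnFailure,
            Containers: []corev1.Container{{
              Name:  "e2e-test-nginx-job",
              Image: "docker.io/library/nginx:1.14-alpine",
            }},
          },
        },
      },
    }
  }
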
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:49:46.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 11 08:49:46.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-222'
Aug 11 08:49:46.497: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 11 08:49:46.497: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug 11 08:49:48.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-222'
Aug 11 08:49:48.696: INFO: stderr: ""
Aug 11 08:49:48.696: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:49:48.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-222" for this suite.
Aug 11 08:49:54.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:49:54.938: INFO: namespace kubectl-222 deletion completed in 6.234624644s

• [SLOW TEST:8.605 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
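
Verifying that "the pod controlled by deployment ... was created" amounts to following the owner chain: the Deployment owns a ReplicaSet named after it, and the ReplicaSet owns the pods. A sketch of that check with v1.15-era client-go; the prefix match on the ReplicaSet name is a simplification of what the suite actually does:

  package sketch

  import (
    "strings"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
  )

  // podsControlledBy returns pods whose owner is a ReplicaSet named after
  // the deployment, i.e. the Deployment -> ReplicaSet -> Pod chain.
  func podsControlledBy(cs kubernetes.Interface, ns, deployment string) ([]string, error) {
    pods, err := cs.CoreV1().Pods(ns).List(metav1.ListOptions{})
    if err != nil {
      return nil, err
    }
    var owned []string
    for _, p := range pods.Items {
      for _, ref := range p.OwnerReferences {
        if ref.Kind == "ReplicaSet" && strings.HasPrefix(ref.Name, deployment+"-") {
          owned = append(owned, p.Name)
        }
      }
    }
    return owned, nil
  }
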
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:49:54.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 11 08:49:54.987: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 08:49:54.996: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 08:49:54.998: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 11 08:49:55.006: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 11 08:49:55.006: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 11 08:49:55.006: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container statuses recorded)
Aug 11 08:49:55.006: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 08:49:55.006: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 11 08:49:55.013: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container statuses recorded)
Aug 11 08:49:55.013: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 08:49:55.013: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container statuses recorded)
Aug 11 08:49:55.013: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-edd77d8a-d95a-4b69-b799-bcd6b6d09d31 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-edd77d8a-d95a-4b69-b799-bcd6b6d09d31 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-edd77d8a-d95a-4b69-b799-bcd6b6d09d31
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:50:03.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2968" for this suite.
Aug 11 08:50:17.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:50:17.366: INFO: namespace sched-pred-2968 deletion completed in 14.147295807s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.427 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
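
The NodeSelector steps above are a two-move dance: stamp a unique label on whichever node proved schedulable, then relaunch the pod with a nodeSelector demanding that label so it can only land there. A sketch with v1.15-era client-go; the label key mirrors the log's kubernetes.io/e2e-... pattern, but any unique key works:

  package sketch

  import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
  )

  // labelAndPin patches a label onto the node, then creates a pod whose
  // nodeSelector demands exactly that label, pinning it to the node.
  func labelAndPin(cs kubernetes.Interface, ns, node, key, value string) error {
    patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s"}}}`, key, value))
    if _, err := cs.CoreV1().Nodes().Patch(node, types.StrategicMergePatchType, patch); err != nil {
      return err
    }
    pod := &corev1.Pod{
      ObjectMeta: metav1.ObjectMeta{Name: "with-labels", Namespace: ns},
      Spec: corev1.PodSpec{
        NodeSelector: map[string]string{key: value},
        Containers:   []corev1.Container{{Name: "c", Image: "nginx"}},
      },
    }
    _, err := cs.CoreV1().Pods(ns).Create(pod)
    return err
  }
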
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:50:17.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8861
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8861
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-8861
Aug 11 08:50:17.476: INFO: Found 0 stateful pods, waiting for 1
Aug 11 08:50:27.481: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with an unhealthy stateful pod
Aug 11 08:50:27.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 08:50:27.717: INFO: stderr: "I0811 08:50:27.602344    1770 log.go:172] (0xc00021ea50) (0xc000780960) Create stream\nI0811 08:50:27.602389    1770 log.go:172] (0xc00021ea50) (0xc000780960) Stream added, broadcasting: 1\nI0811 08:50:27.604332    1770 log.go:172] (0xc00021ea50) Reply frame received for 1\nI0811 08:50:27.604401    1770 log.go:172] (0xc00021ea50) (0xc000834000) Create stream\nI0811 08:50:27.604438    1770 log.go:172] (0xc00021ea50) (0xc000834000) Stream added, broadcasting: 3\nI0811 08:50:27.605475    1770 log.go:172] (0xc00021ea50) Reply frame received for 3\nI0811 08:50:27.605514    1770 log.go:172] (0xc00021ea50) (0xc000780a00) Create stream\nI0811 08:50:27.605522    1770 log.go:172] (0xc00021ea50) (0xc000780a00) Stream added, broadcasting: 5\nI0811 08:50:27.606690    1770 log.go:172] (0xc00021ea50) Reply frame received for 5\nI0811 08:50:27.676622    1770 log.go:172] (0xc00021ea50) Data frame received for 5\nI0811 08:50:27.676659    1770 log.go:172] (0xc000780a00) (5) Data frame handling\nI0811 08:50:27.676689    1770 log.go:172] (0xc000780a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 08:50:27.708362    1770 log.go:172] (0xc00021ea50) Data frame received for 3\nI0811 08:50:27.708391    1770 log.go:172] (0xc000834000) (3) Data frame handling\nI0811 08:50:27.708404    1770 log.go:172] (0xc000834000) (3) Data frame sent\nI0811 08:50:27.708493    1770 log.go:172] (0xc00021ea50) Data frame received for 3\nI0811 08:50:27.708527    1770 log.go:172] (0xc000834000) (3) Data frame handling\nI0811 08:50:27.708875    1770 log.go:172] (0xc00021ea50) Data frame received for 5\nI0811 08:50:27.708912    1770 log.go:172] (0xc000780a00) (5) Data frame handling\nI0811 08:50:27.710928    1770 log.go:172] (0xc00021ea50) Data frame received for 1\nI0811 08:50:27.710947    1770 log.go:172] (0xc000780960) (1) Data frame handling\nI0811 08:50:27.710954    1770 log.go:172] (0xc000780960) (1) Data frame sent\nI0811 08:50:27.710973    1770 log.go:172] (0xc00021ea50) (0xc000780960) Stream removed, broadcasting: 1\nI0811 08:50:27.711235    1770 log.go:172] (0xc00021ea50) (0xc000780960) Stream removed, broadcasting: 1\nI0811 08:50:27.711250    1770 log.go:172] (0xc00021ea50) (0xc000834000) Stream removed, broadcasting: 3\nI0811 08:50:27.711374    1770 log.go:172] (0xc00021ea50) Go away received\nI0811 08:50:27.711431    1770 log.go:172] (0xc00021ea50) (0xc000780a00) Stream removed, broadcasting: 5\n"
Aug 11 08:50:27.718: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 08:50:27.718: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 08:50:27.722: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 11 08:50:37.727: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 08:50:37.727: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 08:50:37.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999621s
Aug 11 08:50:38.746: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997364489s
Aug 11 08:50:39.751: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992231267s
Aug 11 08:50:40.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987185596s
Aug 11 08:50:41.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982793327s
Aug 11 08:50:42.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973252334s
Aug 11 08:50:43.812: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.930587405s
Aug 11 08:50:44.844: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.926144502s
Aug 11 08:50:45.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.894450111s
Aug 11 08:50:46.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 890.765237ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-8861
Aug 11 08:50:47.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 08:50:48.099: INFO: stderr: "I0811 08:50:47.995494    1792 log.go:172] (0xc000116f20) (0xc0005e2be0) Create stream\nI0811 08:50:47.995549    1792 log.go:172] (0xc000116f20) (0xc0005e2be0) Stream added, broadcasting: 1\nI0811 08:50:47.998049    1792 log.go:172] (0xc000116f20) Reply frame received for 1\nI0811 08:50:47.998101    1792 log.go:172] (0xc000116f20) (0xc0008c2000) Create stream\nI0811 08:50:47.998117    1792 log.go:172] (0xc000116f20) (0xc0008c2000) Stream added, broadcasting: 3\nI0811 08:50:47.999204    1792 log.go:172] (0xc000116f20) Reply frame received for 3\nI0811 08:50:47.999239    1792 log.go:172] (0xc000116f20) (0xc0005e2c80) Create stream\nI0811 08:50:47.999251    1792 log.go:172] (0xc000116f20) (0xc0005e2c80) Stream added, broadcasting: 5\nI0811 08:50:48.000360    1792 log.go:172] (0xc000116f20) Reply frame received for 5\nI0811 08:50:48.089985    1792 log.go:172] (0xc000116f20) Data frame received for 3\nI0811 08:50:48.090035    1792 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0811 08:50:48.090070    1792 log.go:172] (0xc0008c2000) (3) Data frame sent\nI0811 08:50:48.090295    1792 log.go:172] (0xc000116f20) Data frame received for 5\nI0811 08:50:48.090319    1792 log.go:172] (0xc0005e2c80) (5) Data frame handling\nI0811 08:50:48.090332    1792 log.go:172] (0xc0005e2c80) (5) Data frame sent\nI0811 08:50:48.090344    1792 log.go:172] (0xc000116f20) Data frame received for 5\nI0811 08:50:48.090355    1792 log.go:172] (0xc0005e2c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 08:50:48.090385    1792 log.go:172] (0xc000116f20) Data frame received for 3\nI0811 08:50:48.090405    1792 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0811 08:50:48.092108    1792 log.go:172] (0xc000116f20) Data frame received for 1\nI0811 08:50:48.092143    1792 log.go:172] (0xc0005e2be0) (1) Data frame handling\nI0811 08:50:48.092165    1792 log.go:172] (0xc0005e2be0) (1) Data frame sent\nI0811 08:50:48.092187    1792 log.go:172] (0xc000116f20) (0xc0005e2be0) Stream removed, broadcasting: 1\nI0811 08:50:48.092633    1792 log.go:172] (0xc000116f20) (0xc0005e2be0) Stream removed, broadcasting: 1\nI0811 08:50:48.092664    1792 log.go:172] (0xc000116f20) (0xc0008c2000) Stream removed, broadcasting: 3\nI0811 08:50:48.092675    1792 log.go:172] (0xc000116f20) (0xc0005e2c80) Stream removed, broadcasting: 5\n"
Aug 11 08:50:48.099: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 08:50:48.099: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 08:50:48.103: INFO: Found 1 stateful pods, waiting for 3
Aug 11 08:50:58.108: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:50:58.108: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:50:58.108: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Confirming that stateful set scale down will halt with an unhealthy stateful pod
Aug 11 08:50:58.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 08:50:58.338: INFO: stderr: "I0811 08:50:58.236071    1813 log.go:172] (0xc0009484d0) (0xc00043e6e0) Create stream\nI0811 08:50:58.236127    1813 log.go:172] (0xc0009484d0) (0xc00043e6e0) Stream added, broadcasting: 1\nI0811 08:50:58.238598    1813 log.go:172] (0xc0009484d0) Reply frame received for 1\nI0811 08:50:58.238661    1813 log.go:172] (0xc0009484d0) (0xc000404460) Create stream\nI0811 08:50:58.238688    1813 log.go:172] (0xc0009484d0) (0xc000404460) Stream added, broadcasting: 3\nI0811 08:50:58.239840    1813 log.go:172] (0xc0009484d0) Reply frame received for 3\nI0811 08:50:58.239884    1813 log.go:172] (0xc0009484d0) (0xc00068c000) Create stream\nI0811 08:50:58.239900    1813 log.go:172] (0xc0009484d0) (0xc00068c000) Stream added, broadcasting: 5\nI0811 08:50:58.241231    1813 log.go:172] (0xc0009484d0) Reply frame received for 5\nI0811 08:50:58.330658    1813 log.go:172] (0xc0009484d0) Data frame received for 5\nI0811 08:50:58.330713    1813 log.go:172] (0xc00068c000) (5) Data frame handling\nI0811 08:50:58.330734    1813 log.go:172] (0xc00068c000) (5) Data frame sent\nI0811 08:50:58.330751    1813 log.go:172] (0xc0009484d0) Data frame received for 5\nI0811 08:50:58.330766    1813 log.go:172] (0xc00068c000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 08:50:58.330804    1813 log.go:172] (0xc0009484d0) Data frame received for 3\nI0811 08:50:58.330835    1813 log.go:172] (0xc000404460) (3) Data frame handling\nI0811 08:50:58.330860    1813 log.go:172] (0xc000404460) (3) Data frame sent\nI0811 08:50:58.330875    1813 log.go:172] (0xc0009484d0) Data frame received for 3\nI0811 08:50:58.330884    1813 log.go:172] (0xc000404460) (3) Data frame handling\nI0811 08:50:58.332164    1813 log.go:172] (0xc0009484d0) Data frame received for 1\nI0811 08:50:58.332179    1813 log.go:172] (0xc00043e6e0) (1) Data frame handling\nI0811 08:50:58.332186    1813 log.go:172] (0xc00043e6e0) (1) Data frame sent\nI0811 08:50:58.332197    1813 log.go:172] (0xc0009484d0) (0xc00043e6e0) Stream removed, broadcasting: 1\nI0811 08:50:58.332454    1813 log.go:172] (0xc0009484d0) (0xc00043e6e0) Stream removed, broadcasting: 1\nI0811 08:50:58.332484    1813 log.go:172] (0xc0009484d0) (0xc000404460) Stream removed, broadcasting: 3\nI0811 08:50:58.332498    1813 log.go:172] (0xc0009484d0) (0xc00068c000) Stream removed, broadcasting: 5\n"
Aug 11 08:50:58.338: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 08:50:58.338: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 08:50:58.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 08:50:58.596: INFO: stderr: "I0811 08:50:58.466091    1837 log.go:172] (0xc0009360b0) (0xc0008f40a0) Create stream\nI0811 08:50:58.466147    1837 log.go:172] (0xc0009360b0) (0xc0008f40a0) Stream added, broadcasting: 1\nI0811 08:50:58.468988    1837 log.go:172] (0xc0009360b0) Reply frame received for 1\nI0811 08:50:58.469045    1837 log.go:172] (0xc0009360b0) (0xc00093e000) Create stream\nI0811 08:50:58.469061    1837 log.go:172] (0xc0009360b0) (0xc00093e000) Stream added, broadcasting: 3\nI0811 08:50:58.470310    1837 log.go:172] (0xc0009360b0) Reply frame received for 3\nI0811 08:50:58.470365    1837 log.go:172] (0xc0009360b0) (0xc00093e0a0) Create stream\nI0811 08:50:58.470383    1837 log.go:172] (0xc0009360b0) (0xc00093e0a0) Stream added, broadcasting: 5\nI0811 08:50:58.471434    1837 log.go:172] (0xc0009360b0) Reply frame received for 5\nI0811 08:50:58.530101    1837 log.go:172] (0xc0009360b0) Data frame received for 5\nI0811 08:50:58.530124    1837 log.go:172] (0xc00093e0a0) (5) Data frame handling\nI0811 08:50:58.530136    1837 log.go:172] (0xc00093e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 08:50:58.587461    1837 log.go:172] (0xc0009360b0) Data frame received for 3\nI0811 08:50:58.587492    1837 log.go:172] (0xc00093e000) (3) Data frame handling\nI0811 08:50:58.587509    1837 log.go:172] (0xc00093e000) (3) Data frame sent\nI0811 08:50:58.587720    1837 log.go:172] (0xc0009360b0) Data frame received for 5\nI0811 08:50:58.587768    1837 log.go:172] (0xc0009360b0) Data frame received for 3\nI0811 08:50:58.587819    1837 log.go:172] (0xc00093e000) (3) Data frame handling\nI0811 08:50:58.587869    1837 log.go:172] (0xc00093e0a0) (5) Data frame handling\nI0811 08:50:58.590050    1837 log.go:172] (0xc0009360b0) Data frame received for 1\nI0811 08:50:58.590084    1837 log.go:172] (0xc0008f40a0) (1) Data frame handling\nI0811 08:50:58.590117    1837 log.go:172] (0xc0008f40a0) (1) Data frame sent\nI0811 08:50:58.590140    1837 log.go:172] (0xc0009360b0) (0xc0008f40a0) Stream removed, broadcasting: 1\nI0811 08:50:58.590226    1837 log.go:172] (0xc0009360b0) Go away received\nI0811 08:50:58.590641    1837 log.go:172] (0xc0009360b0) (0xc0008f40a0) Stream removed, broadcasting: 1\nI0811 08:50:58.590662    1837 log.go:172] (0xc0009360b0) (0xc00093e000) Stream removed, broadcasting: 3\nI0811 08:50:58.590673    1837 log.go:172] (0xc0009360b0) (0xc00093e0a0) Stream removed, broadcasting: 5\n"
Aug 11 08:50:58.596: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 08:50:58.596: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 08:50:58.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 08:50:58.856: INFO: stderr: "I0811 08:50:58.714600    1857 log.go:172] (0xc000a46420) (0xc0008ce8c0) Create stream\nI0811 08:50:58.714646    1857 log.go:172] (0xc000a46420) (0xc0008ce8c0) Stream added, broadcasting: 1\nI0811 08:50:58.717946    1857 log.go:172] (0xc000a46420) Reply frame received for 1\nI0811 08:50:58.718049    1857 log.go:172] (0xc000a46420) (0xc00063a140) Create stream\nI0811 08:50:58.718092    1857 log.go:172] (0xc000a46420) (0xc00063a140) Stream added, broadcasting: 3\nI0811 08:50:58.719318    1857 log.go:172] (0xc000a46420) Reply frame received for 3\nI0811 08:50:58.719359    1857 log.go:172] (0xc000a46420) (0xc0008de000) Create stream\nI0811 08:50:58.719377    1857 log.go:172] (0xc000a46420) (0xc0008de000) Stream added, broadcasting: 5\nI0811 08:50:58.720421    1857 log.go:172] (0xc000a46420) Reply frame received for 5\nI0811 08:50:58.784829    1857 log.go:172] (0xc000a46420) Data frame received for 5\nI0811 08:50:58.784858    1857 log.go:172] (0xc0008de000) (5) Data frame handling\nI0811 08:50:58.784874    1857 log.go:172] (0xc0008de000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 08:50:58.848418    1857 log.go:172] (0xc000a46420) Data frame received for 5\nI0811 08:50:58.848449    1857 log.go:172] (0xc0008de000) (5) Data frame handling\nI0811 08:50:58.848465    1857 log.go:172] (0xc000a46420) Data frame received for 3\nI0811 08:50:58.848471    1857 log.go:172] (0xc00063a140) (3) Data frame handling\nI0811 08:50:58.848478    1857 log.go:172] (0xc00063a140) (3) Data frame sent\nI0811 08:50:58.848639    1857 log.go:172] (0xc000a46420) Data frame received for 3\nI0811 08:50:58.848672    1857 log.go:172] (0xc00063a140) (3) Data frame handling\nI0811 08:50:58.850802    1857 log.go:172] (0xc000a46420) Data frame received for 1\nI0811 08:50:58.850818    1857 log.go:172] (0xc0008ce8c0) (1) Data frame handling\nI0811 08:50:58.850829    1857 log.go:172] (0xc0008ce8c0) (1) Data frame sent\nI0811 08:50:58.850903    1857 log.go:172] (0xc000a46420) (0xc0008ce8c0) Stream removed, broadcasting: 1\nI0811 08:50:58.851158    1857 log.go:172] (0xc000a46420) (0xc0008ce8c0) Stream removed, broadcasting: 1\nI0811 08:50:58.851175    1857 log.go:172] (0xc000a46420) (0xc00063a140) Stream removed, broadcasting: 3\nI0811 08:50:58.851269    1857 log.go:172] (0xc000a46420) (0xc0008de000) Stream removed, broadcasting: 5\n"
Aug 11 08:50:58.856: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 08:50:58.856: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 08:50:58.856: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 08:50:58.861: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 11 08:51:08.870: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 08:51:08.870: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 08:51:08.870: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 08:51:08.896: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999966s
Aug 11 08:51:09.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980730802s
Aug 11 08:51:10.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975348674s
Aug 11 08:51:11.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961112686s
Aug 11 08:51:12.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.955194556s
Aug 11 08:51:13.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.95084941s
Aug 11 08:51:14.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.94536982s
Aug 11 08:51:15.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.940894504s
Aug 11 08:51:16.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93542737s
Aug 11 08:51:17.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.897056ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8861
Aug 11 08:51:18.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 08:51:19.174: INFO: stderr: "I0811 08:51:19.088273    1877 log.go:172] (0xc000116e70) (0xc000188820) Create stream\nI0811 08:51:19.088356    1877 log.go:172] (0xc000116e70) (0xc000188820) Stream added, broadcasting: 1\nI0811 08:51:19.090702    1877 log.go:172] (0xc000116e70) Reply frame received for 1\nI0811 08:51:19.090749    1877 log.go:172] (0xc000116e70) (0xc0006f2000) Create stream\nI0811 08:51:19.090766    1877 log.go:172] (0xc000116e70) (0xc0006f2000) Stream added, broadcasting: 3\nI0811 08:51:19.091725    1877 log.go:172] (0xc000116e70) Reply frame received for 3\nI0811 08:51:19.091772    1877 log.go:172] (0xc000116e70) (0xc00078e000) Create stream\nI0811 08:51:19.091787    1877 log.go:172] (0xc000116e70) (0xc00078e000) Stream added, broadcasting: 5\nI0811 08:51:19.092811    1877 log.go:172] (0xc000116e70) Reply frame received for 5\nI0811 08:51:19.166659    1877 log.go:172] (0xc000116e70) Data frame received for 5\nI0811 08:51:19.166715    1877 log.go:172] (0xc00078e000) (5) Data frame handling\nI0811 08:51:19.166728    1877 log.go:172] (0xc00078e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 08:51:19.166774    1877 log.go:172] (0xc000116e70) Data frame received for 3\nI0811 08:51:19.166808    1877 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0811 08:51:19.166853    1877 log.go:172] (0xc0006f2000) (3) Data frame sent\nI0811 08:51:19.166870    1877 log.go:172] (0xc000116e70) Data frame received for 3\nI0811 08:51:19.166879    1877 log.go:172] (0xc0006f2000) (3) Data frame handling\nI0811 08:51:19.166961    1877 log.go:172] (0xc000116e70) Data frame received for 5\nI0811 08:51:19.166976    1877 log.go:172] (0xc00078e000) (5) Data frame handling\nI0811 08:51:19.168304    1877 log.go:172] (0xc000116e70) Data frame received for 1\nI0811 08:51:19.168339    1877 log.go:172] (0xc000188820) (1) Data frame handling\nI0811 08:51:19.168356    1877 log.go:172] (0xc000188820) (1) Data frame sent\nI0811 08:51:19.168372    1877 log.go:172] (0xc000116e70) (0xc000188820) Stream removed, broadcasting: 1\nI0811 08:51:19.168391    1877 log.go:172] (0xc000116e70) Go away received\nI0811 08:51:19.169006    1877 log.go:172] (0xc000116e70) (0xc000188820) Stream removed, broadcasting: 1\nI0811 08:51:19.169034    1877 log.go:172] (0xc000116e70) (0xc0006f2000) Stream removed, broadcasting: 3\nI0811 08:51:19.169055    1877 log.go:172] (0xc000116e70) (0xc00078e000) Stream removed, broadcasting: 5\n"
Aug 11 08:51:19.174: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 08:51:19.174: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 08:51:19.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 08:51:19.399: INFO: stderr: "I0811 08:51:19.309716    1897 log.go:172] (0xc000a2a370) (0xc00060a8c0) Create stream\nI0811 08:51:19.309770    1897 log.go:172] (0xc000a2a370) (0xc00060a8c0) Stream added, broadcasting: 1\nI0811 08:51:19.312442    1897 log.go:172] (0xc000a2a370) Reply frame received for 1\nI0811 08:51:19.312483    1897 log.go:172] (0xc000a2a370) (0xc0008ee000) Create stream\nI0811 08:51:19.312495    1897 log.go:172] (0xc000a2a370) (0xc0008ee000) Stream added, broadcasting: 3\nI0811 08:51:19.313832    1897 log.go:172] (0xc000a2a370) Reply frame received for 3\nI0811 08:51:19.313900    1897 log.go:172] (0xc000a2a370) (0xc0009fa000) Create stream\nI0811 08:51:19.313923    1897 log.go:172] (0xc000a2a370) (0xc0009fa000) Stream added, broadcasting: 5\nI0811 08:51:19.314947    1897 log.go:172] (0xc000a2a370) Reply frame received for 5\nI0811 08:51:19.392143    1897 log.go:172] (0xc000a2a370) Data frame received for 3\nI0811 08:51:19.392200    1897 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0811 08:51:19.392224    1897 log.go:172] (0xc0008ee000) (3) Data frame sent\nI0811 08:51:19.392239    1897 log.go:172] (0xc000a2a370) Data frame received for 3\nI0811 08:51:19.392252    1897 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0811 08:51:19.392295    1897 log.go:172] (0xc000a2a370) Data frame received for 5\nI0811 08:51:19.392353    1897 log.go:172] (0xc0009fa000) (5) Data frame handling\nI0811 08:51:19.392378    1897 log.go:172] (0xc0009fa000) (5) Data frame sent\nI0811 08:51:19.392402    1897 log.go:172] (0xc000a2a370) Data frame received for 5\nI0811 08:51:19.392415    1897 log.go:172] (0xc0009fa000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 08:51:19.393817    1897 log.go:172] (0xc000a2a370) Data frame received for 1\nI0811 08:51:19.393841    1897 log.go:172] (0xc00060a8c0) (1) Data frame handling\nI0811 08:51:19.393854    1897 log.go:172] (0xc00060a8c0) (1) Data frame sent\nI0811 08:51:19.393868    1897 log.go:172] (0xc000a2a370) (0xc00060a8c0) Stream removed, broadcasting: 1\nI0811 08:51:19.393888    1897 log.go:172] (0xc000a2a370) Go away received\nI0811 08:51:19.394229    1897 log.go:172] (0xc000a2a370) (0xc00060a8c0) Stream removed, broadcasting: 1\nI0811 08:51:19.394254    1897 log.go:172] (0xc000a2a370) (0xc0008ee000) Stream removed, broadcasting: 3\nI0811 08:51:19.394269    1897 log.go:172] (0xc000a2a370) (0xc0009fa000) Stream removed, broadcasting: 5\n"
Aug 11 08:51:19.399: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 08:51:19.399: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 08:51:19.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8861 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 08:51:19.597: INFO: stderr: "I0811 08:51:19.533256    1918 log.go:172] (0xc000a34420) (0xc0008c2640) Create stream\nI0811 08:51:19.533327    1918 log.go:172] (0xc000a34420) (0xc0008c2640) Stream added, broadcasting: 1\nI0811 08:51:19.535733    1918 log.go:172] (0xc000a34420) Reply frame received for 1\nI0811 08:51:19.535771    1918 log.go:172] (0xc000a34420) (0xc000930000) Create stream\nI0811 08:51:19.535781    1918 log.go:172] (0xc000a34420) (0xc000930000) Stream added, broadcasting: 3\nI0811 08:51:19.537088    1918 log.go:172] (0xc000a34420) Reply frame received for 3\nI0811 08:51:19.537543    1918 log.go:172] (0xc000a34420) (0xc000956000) Create stream\nI0811 08:51:19.537591    1918 log.go:172] (0xc000a34420) (0xc000956000) Stream added, broadcasting: 5\nI0811 08:51:19.539075    1918 log.go:172] (0xc000a34420) Reply frame received for 5\nI0811 08:51:19.590014    1918 log.go:172] (0xc000a34420) Data frame received for 3\nI0811 08:51:19.590052    1918 log.go:172] (0xc000930000) (3) Data frame handling\nI0811 08:51:19.590076    1918 log.go:172] (0xc000930000) (3) Data frame sent\nI0811 08:51:19.590091    1918 log.go:172] (0xc000a34420) Data frame received for 3\nI0811 08:51:19.590104    1918 log.go:172] (0xc000930000) (3) Data frame handling\nI0811 08:51:19.590161    1918 log.go:172] (0xc000a34420) Data frame received for 5\nI0811 08:51:19.590188    1918 log.go:172] (0xc000956000) (5) Data frame handling\nI0811 08:51:19.590208    1918 log.go:172] (0xc000956000) (5) Data frame sent\nI0811 08:51:19.590221    1918 log.go:172] (0xc000a34420) Data frame received for 5\nI0811 08:51:19.590236    1918 log.go:172] (0xc000956000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 08:51:19.591583    1918 log.go:172] (0xc000a34420) Data frame received for 1\nI0811 08:51:19.591607    1918 log.go:172] (0xc0008c2640) (1) Data frame handling\nI0811 08:51:19.591619    1918 log.go:172] (0xc0008c2640) (1) Data frame sent\nI0811 08:51:19.591631    1918 log.go:172] (0xc000a34420) (0xc0008c2640) Stream removed, broadcasting: 1\nI0811 08:51:19.591661    1918 log.go:172] (0xc000a34420) Go away received\nI0811 08:51:19.591949    1918 log.go:172] (0xc000a34420) (0xc0008c2640) Stream removed, broadcasting: 1\nI0811 08:51:19.591963    1918 log.go:172] (0xc000a34420) (0xc000930000) Stream removed, broadcasting: 3\nI0811 08:51:19.591970    1918 log.go:172] (0xc000a34420) (0xc000956000) Stream removed, broadcasting: 5\n"
Aug 11 08:51:19.598: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 08:51:19.598: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 08:51:19.598: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 11 08:51:49.631: INFO: Deleting all statefulset in ns statefulset-8861
Aug 11 08:51:49.646: INFO: Scaling statefulset ss to 0
Aug 11 08:51:49.657: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 08:51:49.659: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:51:49.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8861" for this suite.
Aug 11 08:51:55.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:51:55.786: INFO: namespace statefulset-8861 deletion completed in 6.10977046s

• [SLOW TEST:98.420 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
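
For reference, a StatefulSet sketch consistent with the selector (baz=blah,foo=bar) and service (test) seen in the log; the image and probe details are assumptions inferred from the /usr/share/nginx/html paths. The `mv index.html /tmp/` trick in the transcript defeats a readiness probe like this one, which is what halts scaling:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: OrderedReady    # default: scale up in order, down in reverse, one pod at a time
  selector:
    matchLabels:
      foo: bar
      baz: blah
  template:
    metadata:
      labels:
        foo: bar
        baz: blah
    spec:
      containers:
      - name: nginx
        image: nginx                   # assumed image
        readinessProbe:
          httpGet:
            path: /index.html          # moving index.html aside makes this probe fail
            port: 80

With OrderedReady management the controller refuses to create ss-1 while ss-0 is unready, and refuses to delete pods while a higher-ordinal pod is unready, which is exactly the "doesn't scale past N" countdown logged above.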
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:51:55.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create a new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0811 08:51:56.925046       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 08:51:56.925: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:51:56.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4094" for this suite.
Aug 11 08:52:02.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:52:03.021: INFO: namespace gc-4094 deletion completed in 6.092825379s

• [SLOW TEST:7.235 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
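
Non-orphaning deletion works through the ownerReferences chain Deployment → ReplicaSet → Pods. A minimal Deployment sketch (name, labels, and image are hypothetical); deleting it with the default deletion propagation (Background, i.e. not orphaning) lets the garbage collector remove the ReplicaSet and its 2 pods, which is what the "wait for all rs to be garbage collected" step above polls for:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpletest-deployment          # hypothetical name
spec:
  replicas: 2                          # the log shows 2 pods pending collection
  selector:
    matchLabels:
      app: gc-test                     # hypothetical label
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: nginx                   # assumed image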
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:52:03.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 11 08:52:07.608: INFO: Successfully updated pod "pod-update-activedeadlineseconds-69dda39d-8c10-4c6c-97d9-9e9e658f7eba"
Aug 11 08:52:07.609: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-69dda39d-8c10-4c6c-97d9-9e9e658f7eba" in namespace "pods-286" to be "terminated due to deadline exceeded"
Aug 11 08:52:07.617: INFO: Pod "pod-update-activedeadlineseconds-69dda39d-8c10-4c6c-97d9-9e9e658f7eba": Phase="Running", Reason="", readiness=true. Elapsed: 8.325419ms
Aug 11 08:52:09.621: INFO: Pod "pod-update-activedeadlineseconds-69dda39d-8c10-4c6c-97d9-9e9e658f7eba": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012886492s
Aug 11 08:52:09.622: INFO: Pod "pod-update-activedeadlineseconds-69dda39d-8c10-4c6c-97d9-9e9e658f7eba" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:52:09.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-286" for this suite.
Aug 11 08:52:15.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:52:15.737: INFO: namespace pods-286 deletion completed in 6.111579515s

• [SLOW TEST:12.714 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
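
The update in this test shortens the pod's activeDeadlineSeconds; once the deadline passes, the kubelet fails the pod with Reason=DeadlineExceeded, matching the Phase="Failed" line above. A sketch of such a pod (the name pattern is from the log; the image, command, and deadline value are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds   # name pattern from the log
spec:
  activeDeadlineSeconds: 5                 # hypothetical value; counted from pod start
  containers:
  - name: main
    image: busybox                         # assumed image
    command: ["sh", "-c", "sleep 3600"]    # deliberately outlives the deadline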
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:52:15.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 08:52:15.882: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c940077f-b5c9-42e0-b489-7da89f3ace3b", Controller:(*bool)(0xc001ae95f2), BlockOwnerDeletion:(*bool)(0xc001ae95f3)}}
Aug 11 08:52:15.905: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"847afb9d-60b8-446c-885a-704d7c284d06", Controller:(*bool)(0xc00300acd2), BlockOwnerDeletion:(*bool)(0xc00300acd3)}}
Aug 11 08:52:16.032: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f4210863-c55f-460f-aab9-99764a3963f0", Controller:(*bool)(0xc002bdb25a), BlockOwnerDeletion:(*bool)(0xc002bdb25b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:52:21.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9270" for this suite.
Aug 11 08:52:27.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:52:27.222: INFO: namespace gc-9270 deletion completed in 6.088570417s

• [SLOW TEST:11.485 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
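
The OwnerReference dumps above show each pod owned by another: pod1 by pod3, pod2 by pod1, pod3 by pod2. A sketch of one link in that circle (the image is an assumption, and the uid is a placeholder; ownerReferences require the owner's actual UID, which only exists after the owner is created):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: <uid-of-pod3>                 # placeholder; must be the real UID of pod3
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1        # assumed image

pod2 and pod3 are built the same way, pointing at pod1 and pod2 respectively; the test passes because the garbage collector resolves such cycles instead of deadlocking on them.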
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:52:27.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 11 08:52:35.337: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:35.342: INFO: Pod pod-with-prestop-http-hook still exists
Aug 11 08:52:37.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:37.346: INFO: Pod pod-with-prestop-http-hook still exists
Aug 11 08:52:39.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:39.346: INFO: Pod pod-with-prestop-http-hook still exists
Aug 11 08:52:41.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:41.346: INFO: Pod pod-with-prestop-http-hook still exists
Aug 11 08:52:43.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:43.349: INFO: Pod pod-with-prestop-http-hook still exists
Aug 11 08:52:45.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:45.346: INFO: Pod pod-with-prestop-http-hook still exists
Aug 11 08:52:47.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 11 08:52:47.346: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:52:47.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9921" for this suite.
Aug 11 08:53:09.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:53:09.448: INFO: namespace container-lifecycle-hook-9921 deletion completed in 22.089573294s

• [SLOW TEST:42.225 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
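
A sketch of the pod with a preStop httpGet hook (the pod name is from the log; the image, hook path, and port are assumptions, since the test points the hook at the separate handler pod created in the BeforeEach):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook     # name from the log
spec:
  containers:
  - name: main
    image: nginx                       # assumed image
    lifecycle:
      preStop:
        httpGet:                       # the kubelet issues this GET before stopping the container
          path: /echo                  # hypothetical path
          port: 8080                   # hypothetical port

The "check prestop hook" step then asserts that the handler pod actually received the request during the deletion observed above.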
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:53:09.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-aebe8759-243a-4f0d-9784-33112d3e1903
STEP: Creating secret with name secret-projected-all-test-volume-d1adc1a4-6ccb-4f94-af7f-76aa0f83b028
STEP: Creating a pod to test all projections for the projected volume plugin
Aug 11 08:53:09.553: INFO: Waiting up to 5m0s for pod "projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67" in namespace "projected-8599" to be "success or failure"
Aug 11 08:53:09.566: INFO: Pod "projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67": Phase="Pending", Reason="", readiness=false. Elapsed: 12.900412ms
Aug 11 08:53:11.570: INFO: Pod "projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017144417s
Aug 11 08:53:13.575: INFO: Pod "projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021811554s
STEP: Saw pod success
Aug 11 08:53:13.575: INFO: Pod "projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67" satisfied condition "success or failure"
Aug 11 08:53:13.578: INFO: Trying to get logs from node iruya-worker pod projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67 container projected-all-volume-test: 
STEP: delete the pod
Aug 11 08:53:13.603: INFO: Waiting for pod projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67 to disappear
Aug 11 08:53:13.703: INFO: Pod projected-volume-3e366a53-558e-49c7-82fb-c861d0e2ad67 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:53:13.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8599" for this suite.
Aug 11 08:53:19.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:53:19.801: INFO: namespace projected-8599 deletion completed in 6.093434685s

• [SLOW TEST:10.352 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
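
A sketch of a pod combining all three projection sources (configMap, secret, downwardAPI) in one volume; the configMap and secret names come from the log, while the mount path, command, and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume               # name prefix from the log
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test    # container name from the log
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls /all-projections && cat /all-projections/podname"]  # hypothetical
    volumeMounts:
    - name: all-in-one
      mountPath: /all-projections
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume-aebe8759-243a-4f0d-9784-33112d3e1903
      - secret:
          name: secret-projected-all-test-volume-d1adc1a4-6ccb-4f94-af7f-76aa0f83b028
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name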
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:53:19.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug 11 08:53:19.878: INFO: Waiting up to 5m0s for pod "var-expansion-529b143b-43e5-4559-909e-85695918decd" in namespace "var-expansion-9852" to be "success or failure"
Aug 11 08:53:19.888: INFO: Pod "var-expansion-529b143b-43e5-4559-909e-85695918decd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.97107ms
Aug 11 08:53:21.930: INFO: Pod "var-expansion-529b143b-43e5-4559-909e-85695918decd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052332618s
Aug 11 08:53:23.934: INFO: Pod "var-expansion-529b143b-43e5-4559-909e-85695918decd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056187015s
STEP: Saw pod success
Aug 11 08:53:23.934: INFO: Pod "var-expansion-529b143b-43e5-4559-909e-85695918decd" satisfied condition "success or failure"
Aug 11 08:53:23.936: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-529b143b-43e5-4559-909e-85695918decd container dapi-container: 
STEP: delete the pod
Aug 11 08:53:23.955: INFO: Waiting for pod var-expansion-529b143b-43e5-4559-909e-85695918decd to disappear
Aug 11 08:53:23.959: INFO: Pod var-expansion-529b143b-43e5-4559-909e-85695918decd no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:53:23.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9852" for this suite.
Aug 11 08:53:29.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:53:30.063: INFO: namespace var-expansion-9852 deletion completed in 6.099537828s

• [SLOW TEST:10.262 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
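
Composition works because env entries may reference variables defined earlier in the same list using $(VAR) syntax, which the kubelet expands before the container starts. A sketch (variable names, values, command, and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion                  # name prefix from the log
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container               # container name from the log
    image: busybox                     # assumed image
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value                 # hypothetical value
    - name: BAR
      value: bar-value                 # hypothetical value
    - name: FOOBAR
      value: "$(FOO)-$(BAR)"           # composed from the earlier vars; order in the list matters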
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:53:30.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 11 08:53:30.151: INFO: Waiting up to 5m0s for pod "downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c" in namespace "downward-api-2430" to be "success or failure"
Aug 11 08:53:30.182: INFO: Pod "downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.414657ms
Aug 11 08:53:32.307: INFO: Pod "downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156041029s
Aug 11 08:53:34.446: INFO: Pod "downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294380264s
STEP: Saw pod success
Aug 11 08:53:34.446: INFO: Pod "downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c" satisfied condition "success or failure"
Aug 11 08:53:34.450: INFO: Trying to get logs from node iruya-worker pod downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c container dapi-container: 
STEP: delete the pod
Aug 11 08:53:34.598: INFO: Waiting for pod downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c to disappear
Aug 11 08:53:34.625: INFO: Pod downward-api-6f03f6a8-b8fd-4e1b-862a-b3c80f58793c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:53:34.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2430" for this suite.
Aug 11 08:53:40.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:53:40.713: INFO: namespace downward-api-2430 deletion completed in 6.084184687s

• [SLOW TEST:10.650 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
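
The resourceFieldRef env source backs this behavior: when the container declares no resource limits, the kubelet substitutes the node's allocatable capacity. A sketch (the pod name pattern and container name are from the log; the image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api                   # name prefix from the log
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container               # container name from the log
    image: busybox                     # assumed image
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu         # no limit set, so this defaults to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory      # likewise defaults to node allocatable memory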
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:53:40.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 11 08:53:40.826: INFO: Waiting up to 5m0s for pod "client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4" in namespace "containers-5909" to be "success or failure"
Aug 11 08:53:40.847: INFO: Pod "client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.341358ms
Aug 11 08:53:43.038: INFO: Pod "client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211794232s
Aug 11 08:53:45.045: INFO: Pod "client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4": Phase="Running", Reason="", readiness=true. Elapsed: 4.219187248s
Aug 11 08:53:47.050: INFO: Pod "client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223907193s
STEP: Saw pod success
Aug 11 08:53:47.050: INFO: Pod "client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4" satisfied condition "success or failure"
Aug 11 08:53:47.053: INFO: Trying to get logs from node iruya-worker2 pod client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4 container test-container: 
STEP: delete the pod
Aug 11 08:53:47.076: INFO: Waiting for pod client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4 to disappear
Aug 11 08:53:47.080: INFO: Pod client-containers-1ec0de8a-daa8-4c5c-89b8-237e9d13bcb4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:53:47.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5909" for this suite.
Aug 11 08:53:53.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:53:53.190: INFO: namespace containers-5909 deletion completed in 6.106453639s

• [SLOW TEST:12.476 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
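
In pod-spec terms, `args` replaces the image's default CMD while leaving its ENTRYPOINT intact (overriding the ENTRYPOINT is what `command` does, exercised by the companion test). A sketch (the argument values are hypothetical; the image is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers              # name prefix from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container               # container name from the log
    image: docker.io/library/busybox:1.29   # assumed image
    args: ["echo", "override", "arguments"]  # hypothetical; replaces the image CMD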
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:53:53.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6ba494b8-ea00-47d1-8635-6cc3a8f0b8a6
STEP: Creating a pod to test consume secrets
Aug 11 08:53:53.364: INFO: Waiting up to 5m0s for pod "pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93" in namespace "secrets-3981" to be "success or failure"
Aug 11 08:53:53.385: INFO: Pod "pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93": Phase="Pending", Reason="", readiness=false. Elapsed: 20.759763ms
Aug 11 08:53:55.389: INFO: Pod "pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024431601s
Aug 11 08:53:57.393: INFO: Pod "pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028714151s
STEP: Saw pod success
Aug 11 08:53:57.393: INFO: Pod "pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93" satisfied condition "success or failure"
Aug 11 08:53:57.395: INFO: Trying to get logs from node iruya-worker pod pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93 container secret-volume-test: 
STEP: delete the pod
Aug 11 08:53:57.591: INFO: Waiting for pod pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93 to disappear
Aug 11 08:53:57.600: INFO: Pod pod-secrets-63444952-f6f2-4c30-9fe0-969256b08a93 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:53:57.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3981" for this suite.
Aug 11 08:54:03.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:54:03.733: INFO: namespace secrets-3981 deletion completed in 6.129149281s

• [SLOW TEST:10.543 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
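
A sketch of the pod mounting one secret at two separate paths (the secret name is from the log; the mount paths, command, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets                    # name prefix from the log
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # container name from the log
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]  # hypothetical
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-6ba494b8-ea00-47d1-8635-6cc3a8f0b8a6   # name from the log
  - name: secret-volume-2
    secret:
      secretName: secret-test-6ba494b8-ea00-47d1-8635-6cc3a8f0b8a6   # same secret, second volume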
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:54:03.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Aug 11 08:54:03.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8218 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 11 08:54:07.299: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0811 08:54:07.217869    1938 log.go:172] (0xc00096e210) (0xc00083a1e0) Create stream\nI0811 08:54:07.217938    1938 log.go:172] (0xc00096e210) (0xc00083a1e0) Stream added, broadcasting: 1\nI0811 08:54:07.220837    1938 log.go:172] (0xc00096e210) Reply frame received for 1\nI0811 08:54:07.220886    1938 log.go:172] (0xc00096e210) (0xc0009c2960) Create stream\nI0811 08:54:07.220906    1938 log.go:172] (0xc00096e210) (0xc0009c2960) Stream added, broadcasting: 3\nI0811 08:54:07.221956    1938 log.go:172] (0xc00096e210) Reply frame received for 3\nI0811 08:54:07.222020    1938 log.go:172] (0xc00096e210) (0xc000668140) Create stream\nI0811 08:54:07.222036    1938 log.go:172] (0xc00096e210) (0xc000668140) Stream added, broadcasting: 5\nI0811 08:54:07.223241    1938 log.go:172] (0xc00096e210) Reply frame received for 5\nI0811 08:54:07.223278    1938 log.go:172] (0xc00096e210) (0xc00083a280) Create stream\nI0811 08:54:07.223291    1938 log.go:172] (0xc00096e210) (0xc00083a280) Stream added, broadcasting: 7\nI0811 08:54:07.224248    1938 log.go:172] (0xc00096e210) Reply frame received for 7\nI0811 08:54:07.224468    1938 log.go:172] (0xc0009c2960) (3) Writing data frame\nI0811 08:54:07.224606    1938 log.go:172] (0xc0009c2960) (3) Writing data frame\nI0811 08:54:07.225713    1938 log.go:172] (0xc00096e210) Data frame received for 5\nI0811 08:54:07.225733    1938 log.go:172] (0xc000668140) (5) Data frame handling\nI0811 08:54:07.225748    1938 log.go:172] (0xc000668140) (5) Data frame sent\nI0811 08:54:07.226296    1938 log.go:172] (0xc00096e210) Data frame received for 5\nI0811 08:54:07.226323    1938 log.go:172] (0xc000668140) (5) Data frame handling\nI0811 08:54:07.226347    1938 log.go:172] (0xc000668140) (5) Data frame sent\nI0811 08:54:07.278828    1938 log.go:172] (0xc00096e210) Data frame received for 5\nI0811 08:54:07.278870    1938 log.go:172] (0xc000668140) (5) Data frame handling\nI0811 08:54:07.278893    1938 log.go:172] (0xc00096e210) Data frame received for 7\nI0811 08:54:07.278902    1938 log.go:172] (0xc00083a280) (7) Data frame handling\nI0811 08:54:07.279469    1938 log.go:172] (0xc00096e210) Data frame received for 1\nI0811 08:54:07.279492    1938 log.go:172] (0xc00096e210) (0xc0009c2960) Stream removed, broadcasting: 3\nI0811 08:54:07.279512    1938 log.go:172] (0xc00083a1e0) (1) Data frame handling\nI0811 08:54:07.279527    1938 log.go:172] (0xc00083a1e0) (1) Data frame sent\nI0811 08:54:07.279540    1938 log.go:172] (0xc00096e210) (0xc00083a1e0) Stream removed, broadcasting: 1\nI0811 08:54:07.279568    1938 log.go:172] (0xc00096e210) Go away received\nI0811 08:54:07.279688    1938 log.go:172] (0xc00096e210) (0xc00083a1e0) Stream removed, broadcasting: 1\nI0811 08:54:07.279715    1938 log.go:172] (0xc00096e210) (0xc0009c2960) Stream removed, broadcasting: 3\nI0811 08:54:07.279726    1938 log.go:172] (0xc00096e210) (0xc000668140) Stream removed, broadcasting: 5\nI0811 08:54:07.279735    1938 log.go:172] (0xc00096e210) (0xc00083a280) Stream removed, broadcasting: 7\n"
Aug 11 08:54:07.299: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:54:09.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8218" for this suite.
Aug 11 08:54:17.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:54:17.396: INFO: namespace kubectl-8218 deletion completed in 8.085694856s

• [SLOW TEST:13.663 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
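
Note: stripped of the harness, the invocation logged above pipes stdin into a one-shot Job and relies on --rm to delete it once the attach session ends. Roughly (the --kubeconfig/--namespace flags are omitted):

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c "cat && echo 'stdin closed'"

As the stderr above says, --generator=job/v1 is deprecated; on newer clusters kubectl create job is the intended replacement.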
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:54:17.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-53681c62-40c6-4ab4-865d-454453b98eaa
STEP: Creating a pod to test consuming configMaps
Aug 11 08:54:17.496: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842" in namespace "projected-5709" to be "success or failure"
Aug 11 08:54:17.500: INFO: Pod "pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179109ms
Aug 11 08:54:19.504: INFO: Pod "pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008031185s
Aug 11 08:54:21.508: INFO: Pod "pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011783771s
STEP: Saw pod success
Aug 11 08:54:21.508: INFO: Pod "pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842" satisfied condition "success or failure"
Aug 11 08:54:21.514: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 08:54:21.597: INFO: Waiting for pod pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842 to disappear
Aug 11 08:54:21.615: INFO: Pod pod-projected-configmaps-ecf7b532-9009-493e-b047-45037f515842 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:54:21.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5709" for this suite.
Aug 11 08:54:27.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:54:27.709: INFO: namespace projected-5709 deletion completed in 6.08990952s

• [SLOW TEST:10.312 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
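
Note: "mappings and Item mode" means a ConfigMap key is remapped to a different file path and given a per-item file mode inside a projected volume. The relevant volume stanza looks roughly like this (names and values illustrative):

volumes:
- name: projected-configmap-volume
  projected:
    sources:
    - configMap:
        name: projected-configmap-test-volume-map
        items:
        - key: data-1
          path: path/to/data-2
          mode: 0400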
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:54:27.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 11 08:54:27.774: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 11 08:54:27.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2054'
Aug 11 08:54:28.156: INFO: stderr: ""
Aug 11 08:54:28.156: INFO: stdout: "service/redis-slave created\n"
Aug 11 08:54:28.156: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 11 08:54:28.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2054'
Aug 11 08:54:28.521: INFO: stderr: ""
Aug 11 08:54:28.522: INFO: stdout: "service/redis-master created\n"
Aug 11 08:54:28.522: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 11 08:54:28.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2054'
Aug 11 08:54:28.820: INFO: stderr: ""
Aug 11 08:54:28.820: INFO: stdout: "service/frontend created\n"
Aug 11 08:54:28.820: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 11 08:54:28.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2054'
Aug 11 08:54:29.096: INFO: stderr: ""
Aug 11 08:54:29.096: INFO: stdout: "deployment.apps/frontend created\n"
Aug 11 08:54:29.096: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 11 08:54:29.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2054'
Aug 11 08:54:29.391: INFO: stderr: ""
Aug 11 08:54:29.391: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 11 08:54:29.392: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 11 08:54:29.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2054'
Aug 11 08:54:29.689: INFO: stderr: ""
Aug 11 08:54:29.689: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug 11 08:54:29.689: INFO: Waiting for all frontend pods to be Running.
Aug 11 08:54:39.740: INFO: Waiting for frontend to serve content.
Aug 11 08:54:39.754: INFO: Trying to add a new entry to the guestbook.
Aug 11 08:54:39.770: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 11 08:54:39.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2054'
Aug 11 08:54:39.908: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:54:39.908: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 08:54:39.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2054'
Aug 11 08:54:40.110: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:54:40.110: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 08:54:40.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2054'
Aug 11 08:54:40.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:54:40.274: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 08:54:40.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2054'
Aug 11 08:54:40.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:54:40.381: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 08:54:40.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2054'
Aug 11 08:54:40.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:54:40.516: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 11 08:54:40.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2054'
Aug 11 08:54:40.604: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:54:40.604: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:54:40.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2054" for this suite.
Aug 11 08:55:20.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:55:20.799: INFO: namespace kubectl-2054 deletion completed in 40.12389146s

• [SLOW TEST:53.090 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
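
Note: the "validating guestbook app" step adds an entry through the frontend and reads it back. Assuming the sample gb-frontend image's guestbook.php interface (an assumption; the run does not print the URLs it hits), the round trip from any pod inside the cluster looks roughly like:

curl "http://frontend/guestbook.php?cmd=set&key=messages&value=TestEntry"
curl "http://frontend/guestbook.php?cmd=get&key=messages"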
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:55:20.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug 11 08:55:20.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 11 08:55:21.067: INFO: stderr: ""
Aug 11 08:55:21.067: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:55:21.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4021" for this suite.
Aug 11 08:55:27.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:55:27.187: INFO: namespace kubectl-4021 deletion completed in 6.116111074s

• [SLOW TEST:6.387 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
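
Note: outside the harness, this check reduces to confirming that the core group's v1 appears in kubectl api-versions; grep -x demands an exact line match, so e.g. apps/v1 does not count:

kubectl api-versions | grep -x v1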
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:55:27.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug 11 08:55:31.283: INFO: Pod pod-hostip-c9f0c9ee-b414-41e5-ad16-cb76fc5e2fb5 has hostIP: 172.18.0.5
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:55:31.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5284" for this suite.
Aug 11 08:55:53.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:55:53.376: INFO: namespace pods-5284 deletion completed in 22.088139363s

• [SLOW TEST:26.188 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
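
Note: status.hostIP is populated once the pod is bound to a node, and can be read back directly, e.g. for the pod created above:

kubectl get pod pod-hostip-c9f0c9ee-b414-41e5-ad16-cb76fc5e2fb5 \
  --namespace=pods-5284 -o jsonpath='{.status.hostIP}'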
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:55:53.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-538
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 11 08:55:53.479: INFO: Found 0 stateful pods, waiting for 3
Aug 11 08:56:03.483: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:56:03.483: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:56:03.483: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Aug 11 08:56:13.483: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:56:13.483: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:56:13.483: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 11 08:56:13.514: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 11 08:56:23.558: INFO: Updating stateful set ss2
Aug 11 08:56:23.582: INFO: Waiting for Pod statefulset-538/ss2-2 to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c
Aug 11 08:56:33.589: INFO: Waiting for Pod statefulset-538/ss2-2 to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 11 08:56:43.775: INFO: Found 2 stateful pods, waiting for 3
Aug 11 08:56:53.780: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:56:53.780: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:56:53.780: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 11 08:57:03.780: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:57:03.781: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 08:57:03.781: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 11 08:57:03.805: INFO: Updating stateful set ss2
Aug 11 08:57:03.920: INFO: Waiting for Pod statefulset-538/ss2-1 to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c
Aug 11 08:57:13.927: INFO: Waiting for Pod statefulset-538/ss2-1 to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c
Aug 11 08:57:23.942: INFO: Updating stateful set ss2
Aug 11 08:57:23.974: INFO: Waiting for StatefulSet statefulset-538/ss2 to complete update
Aug 11 08:57:23.974: INFO: Waiting for Pod statefulset-538/ss2-0 to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c
Aug 11 08:57:33.983: INFO: Waiting for StatefulSet statefulset-538/ss2 to complete update
Aug 11 08:57:33.983: INFO: Waiting for Pod statefulset-538/ss2-0 to move from revision ss2-6c5cd755cd to update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 11 08:57:43.980: INFO: Deleting all statefulset in ns statefulset-538
Aug 11 08:57:43.982: INFO: Scaling statefulset ss2 to 0
Aug 11 08:58:24.003: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 08:58:24.006: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:58:24.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-538" for this suite.
Aug 11 08:58:30.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:58:30.151: INFO: namespace statefulset-538 deletion completed in 6.116216255s

• [SLOW TEST:156.775 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
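
Note: the canary and phased phases above are driven by spec.updateStrategy.rollingUpdate.partition: pods with ordinal >= partition move to the new revision, lower ordinals stay on the old one. A sketch of the patch sequence (partition values inferred from the per-pod waits logged above):

# canary: only ss2-2 (ordinal >= 2) gets the new revision
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
# phased roll-out: lower the partition until it reaches 0
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'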
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:58:30.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug 11 08:58:30.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6684'
Aug 11 08:58:33.190: INFO: stderr: ""
Aug 11 08:58:33.190: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug 11 08:58:34.195: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:58:34.195: INFO: Found 0 / 1
Aug 11 08:58:35.194: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:58:35.194: INFO: Found 0 / 1
Aug 11 08:58:36.195: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:58:36.195: INFO: Found 0 / 1
Aug 11 08:58:37.195: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:58:37.195: INFO: Found 1 / 1
Aug 11 08:58:37.195: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 11 08:58:37.199: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 08:58:37.199: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 11 08:58:37.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfxvp redis-master --namespace=kubectl-6684'
Aug 11 08:58:37.318: INFO: stderr: ""
Aug 11 08:58:37.318: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Aug 08:58:36.236 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Aug 08:58:36.236 # Server started, Redis version 3.2.12\n1:M 11 Aug 08:58:36.236 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Aug 08:58:36.236 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 11 08:58:37.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfxvp redis-master --namespace=kubectl-6684 --tail=1'
Aug 11 08:58:37.436: INFO: stderr: ""
Aug 11 08:58:37.436: INFO: stdout: "1:M 11 Aug 08:58:36.236 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 11 08:58:37.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfxvp redis-master --namespace=kubectl-6684 --limit-bytes=1'
Aug 11 08:58:37.539: INFO: stderr: ""
Aug 11 08:58:37.539: INFO: stdout: " "
STEP: exposing timestamps
Aug 11 08:58:37.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfxvp redis-master --namespace=kubectl-6684 --tail=1 --timestamps'
Aug 11 08:58:37.655: INFO: stderr: ""
Aug 11 08:58:37.655: INFO: stdout: "2020-08-11T08:58:36.236426117Z 1:M 11 Aug 08:58:36.236 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 11 08:58:40.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfxvp redis-master --namespace=kubectl-6684 --since=1s'
Aug 11 08:58:40.263: INFO: stderr: ""
Aug 11 08:58:40.263: INFO: stdout: ""
Aug 11 08:58:40.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zfxvp redis-master --namespace=kubectl-6684 --since=24h'
Aug 11 08:58:40.371: INFO: stderr: ""
Aug 11 08:58:40.371: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Aug 08:58:36.236 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Aug 08:58:36.236 # Server started, Redis version 3.2.12\n1:M 11 Aug 08:58:36.236 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Aug 08:58:36.236 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug 11 08:58:40.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6684'
Aug 11 08:58:40.462: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 08:58:40.462: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 11 08:58:40.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6684'
Aug 11 08:58:40.571: INFO: stderr: "No resources found.\n"
Aug 11 08:58:40.571: INFO: stdout: ""
Aug 11 08:58:40.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6684 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 08:58:40.667: INFO: stderr: ""
Aug 11 08:58:40.667: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:58:40.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6684" for this suite.
Aug 11 08:58:46.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:58:47.037: INFO: namespace kubectl-6684 deletion completed in 6.226248364s

• [SLOW TEST:16.885 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
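
Note: collected in one place, the log-filtering flags exercised above (pod and container names from this run; --kubeconfig/--namespace omitted):

kubectl logs redis-master-zfxvp redis-master --tail=1                # last line only
kubectl logs redis-master-zfxvp redis-master --limit-bytes=1         # first byte only
kubectl logs redis-master-zfxvp redis-master --tail=1 --timestamps   # prefix RFC3339 timestamps
kubectl logs redis-master-zfxvp redis-master --since=1s              # empty here: nothing logged in the last second
kubectl logs redis-master-zfxvp redis-master --since=24h             # the full log again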
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:58:47.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ff6f5d3b-7e07-4380-b00b-105c790859a7
STEP: Creating a pod to test consuming configMaps
Aug 11 08:58:47.142: INFO: Waiting up to 5m0s for pod "pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1" in namespace "configmap-4229" to be "success or failure"
Aug 11 08:58:47.171: INFO: Pod "pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.260919ms
Aug 11 08:58:49.174: INFO: Pod "pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031743998s
Aug 11 08:58:51.306: INFO: Pod "pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.163671973s
Aug 11 08:58:53.310: INFO: Pod "pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167376766s
STEP: Saw pod success
Aug 11 08:58:53.310: INFO: Pod "pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1" satisfied condition "success or failure"
Aug 11 08:58:53.312: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1 container configmap-volume-test: 
STEP: delete the pod
Aug 11 08:58:53.334: INFO: Waiting for pod pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1 to disappear
Aug 11 08:58:53.354: INFO: Pod pod-configmaps-fcecc746-1cb9-4fc6-bd44-cc0b195027b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:58:53.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4229" for this suite.
Aug 11 08:58:59.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:58:59.521: INFO: namespace configmap-4229 deletion completed in 6.16267009s

• [SLOW TEST:12.484 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
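
Note: a minimal sketch of consuming a ConfigMap as a volume, matching the shape of this test (all names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example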
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:58:59.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1b7a718e-5fd3-4583-9007-e72c0b8ba716
STEP: Creating a pod to test consuming configMaps
Aug 11 08:58:59.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6" in namespace "projected-1105" to be "success or failure"
Aug 11 08:58:59.687: INFO: Pod "pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.096592ms
Aug 11 08:59:01.692: INFO: Pod "pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019201865s
Aug 11 08:59:03.696: INFO: Pod "pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023233573s
STEP: Saw pod success
Aug 11 08:59:03.696: INFO: Pod "pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6" satisfied condition "success or failure"
Aug 11 08:59:03.699: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 08:59:03.811: INFO: Waiting for pod pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6 to disappear
Aug 11 08:59:03.820: INFO: Pod pod-projected-configmaps-80e8108c-989f-46fb-ad48-4ea2a4ba62a6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:59:03.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1105" for this suite.
Aug 11 08:59:09.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:59:09.915: INFO: namespace projected-1105 deletion completed in 6.092064738s

• [SLOW TEST:10.394 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:59:09.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0868a53e-c292-4630-a0de-9ca6c6ca4629
STEP: Creating a pod to test consuming configMaps
Aug 11 08:59:10.039: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed" in namespace "projected-8336" to be "success or failure"
Aug 11 08:59:10.049: INFO: Pod "pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.682265ms
Aug 11 08:59:12.229: INFO: Pod "pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190098655s
Aug 11 08:59:14.233: INFO: Pod "pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194247801s
STEP: Saw pod success
Aug 11 08:59:14.233: INFO: Pod "pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed" satisfied condition "success or failure"
Aug 11 08:59:14.236: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed container projected-configmap-volume-test: 
STEP: delete the pod
Aug 11 08:59:14.410: INFO: Waiting for pod pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed to disappear
Aug 11 08:59:14.444: INFO: Pod pod-projected-configmaps-74f81c41-fdc9-409a-9e79-3a03474f75ed no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:59:14.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8336" for this suite.
Aug 11 08:59:20.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:59:20.588: INFO: namespace projected-8336 deletion completed in 6.139917749s

• [SLOW TEST:10.673 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
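
Note: unlike the per-item mode shown earlier, defaultMode applies a single file mode to every key projected from the volume (values illustrative):

volumes:
- name: projected-configmap-volume
  projected:
    defaultMode: 0400
    sources:
    - configMap:
        name: projected-configmap-test-volume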
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:59:20.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 11 08:59:20.641: INFO: Waiting up to 5m0s for pod "pod-f5c05b80-c4d8-434f-a322-5333dedd4165" in namespace "emptydir-9703" to be "success or failure"
Aug 11 08:59:20.683: INFO: Pod "pod-f5c05b80-c4d8-434f-a322-5333dedd4165": Phase="Pending", Reason="", readiness=false. Elapsed: 42.312678ms
Aug 11 08:59:22.688: INFO: Pod "pod-f5c05b80-c4d8-434f-a322-5333dedd4165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046766646s
Aug 11 08:59:24.692: INFO: Pod "pod-f5c05b80-c4d8-434f-a322-5333dedd4165": Phase="Running", Reason="", readiness=true. Elapsed: 4.051030068s
Aug 11 08:59:26.697: INFO: Pod "pod-f5c05b80-c4d8-434f-a322-5333dedd4165": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055535484s
STEP: Saw pod success
Aug 11 08:59:26.697: INFO: Pod "pod-f5c05b80-c4d8-434f-a322-5333dedd4165" satisfied condition "success or failure"
Aug 11 08:59:26.700: INFO: Trying to get logs from node iruya-worker pod pod-f5c05b80-c4d8-434f-a322-5333dedd4165 container test-container: 
STEP: delete the pod
Aug 11 08:59:26.737: INFO: Waiting for pod pod-f5c05b80-c4d8-434f-a322-5333dedd4165 to disappear
Aug 11 08:59:26.741: INFO: Pod pod-f5c05b80-c4d8-434f-a322-5333dedd4165 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:59:26.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9703" for this suite.
Aug 11 08:59:32.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 08:59:32.832: INFO: namespace emptydir-9703 deletion completed in 6.086739861s

• [SLOW TEST:12.243 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
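
Note: "(non-root,0644,tmpfs)" names the three knobs this test turns: run as a non-root UID, create a file with 0644 permissions, and back the emptyDir with memory. A hedged sketch (UID, paths, and command illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory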
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with a lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 08:59:32.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with a lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 11 08:59:41.000: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:41.007: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:43.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:43.014: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:45.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:45.012: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:47.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:47.022: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:49.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:49.013: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:51.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:51.011: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:53.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:53.012: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:55.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:55.012: INFO: Pod pod-with-poststart-http-hook still exists
Aug 11 08:59:57.008: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 11 08:59:57.011: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 08:59:57.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1755" for this suite.
Aug 11 09:00:19.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:00:19.104: INFO: namespace container-lifecycle-hook-1755 deletion completed in 22.088605835s

• [SLOW TEST:46.271 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with a lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
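
Note: the pod above carries a postStart httpGet hook pointed at the handler pod created in the BeforeEach; the kubelet does not mark the container Running until the hook returns. The hook stanza looks roughly like this (image, path, port, and host are illustrative; the run does not print them):

containers:
- name: pod-with-poststart-http-hook
  image: k8s.gcr.io/pause:3.1
  lifecycle:
    postStart:
      httpGet:
        path: /echo?msg=poststart
        port: 8080
        host: 10.244.1.5   # IP of the handler pod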
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:00:19.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0811 09:00:59.896502       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 11 09:00:59.896: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:00:59.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2796" for this suite.
Aug 11 09:01:07.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:01:07.965: INFO: namespace gc-2796 deletion completed in 8.066430372s

• [SLOW TEST:48.861 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
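
Note: "if delete options say so" means the ReplicationController is deleted with orphan propagation, so the garbage collector must leave its pods running; the 30-second wait then checks that nothing was mistakenly collected. The equivalent outside the harness (RC name illustrative):

kubectl delete rc simpletest-rc --cascade=false
# or, via the API: DeleteOptions{propagationPolicy: Orphan}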
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:01:07.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591
Aug 11 09:01:08.170: INFO: Pod name my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591: Found 0 pods out of 1
Aug 11 09:01:13.173: INFO: Pod name my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591: Found 1 pods out of 1
Aug 11 09:01:13.173: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591" are running
Aug 11 09:01:13.175: INFO: Pod "my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591-rmgt9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:01:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:01:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:01:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:01:08 +0000 UTC Reason: Message:}])
Aug 11 09:01:13.175: INFO: Trying to dial the pod
Aug 11 09:01:18.186: INFO: Controller my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591: Got expected result from replica 1 [my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591-rmgt9]: "my-hostname-basic-d58a096f-3757-4370-91d2-92458efe8591-rmgt9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:01:18.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9453" for this suite.
Aug 11 09:01:24.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:01:24.499: INFO: namespace replication-controller-9453 deletion completed in 6.309237996s

• [SLOW TEST:16.533 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
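
The spec above creates a one-replica RC from a hostname-serving image and then dials the pod to confirm it answers with its own name. A sketch of an equivalent manifest, assuming the serve-hostname test image (which replies with the pod's hostname on port 9376):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-hostname-basic
    spec:
      replicas: 1
      selector:
        name: my-hostname-basic
      template:
        metadata:
          labels:
            name: my-hostname-basic
        spec:
          containers:
          - name: my-hostname-basic
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
            ports:
            - containerPort: 9376

Once the pod is Ready, curl http://<pod-ip>:9376 from inside the cluster should return the pod's name, which is what the "Got expected result from replica 1" line verifies.
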
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:01:24.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b9d57dd6-d6c2-4499-a85d-71e38e082ded
STEP: Creating a pod to test consume secrets
Aug 11 09:01:25.597: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75" in namespace "projected-9644" to be "success or failure"
Aug 11 09:01:25.878: INFO: Pod "pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75": Phase="Pending", Reason="", readiness=false. Elapsed: 280.971543ms
Aug 11 09:01:27.883: INFO: Pod "pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285380555s
Aug 11 09:01:29.886: INFO: Pod "pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.288700374s
STEP: Saw pod success
Aug 11 09:01:29.886: INFO: Pod "pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75" satisfied condition "success or failure"
Aug 11 09:01:29.888: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75 container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 09:01:30.114: INFO: Waiting for pod pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75 to disappear
Aug 11 09:01:30.120: INFO: Pod pod-projected-secrets-22366f1d-c5b3-488f-9f69-c4e26b579f75 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:01:30.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9644" for this suite.
Aug 11 09:01:38.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:01:38.198: INFO: namespace projected-9644 deletion completed in 8.07152319s

• [SLOW TEST:13.698 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
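
The defaultMode variant checked here sets the file mode on keys projected from a secret. A minimal sketch of such a pod, with placeholder secret and key names; note that for projected volumes defaultMode sits on the projected source, and octal notation is accepted:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data"]
        volumeMounts:
        - name: projected-secret
          mountPath: /etc/projected
          readOnly: true
      volumes:
      - name: projected-secret
        projected:
          defaultMode: 0400          # the mode the test asserts on the mounted file
          sources:
          - secret:
              name: my-secret        # placeholder; must contain a key named "data"
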
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:01:38.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-ncr2
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 09:01:38.605: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ncr2" in namespace "subpath-6969" to be "success or failure"
Aug 11 09:01:38.609: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.945125ms
Aug 11 09:01:40.613: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008009315s
Aug 11 09:01:42.618: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012742561s
Aug 11 09:01:44.622: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 6.016478239s
Aug 11 09:01:46.626: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 8.020805408s
Aug 11 09:01:48.631: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 10.026139517s
Aug 11 09:01:50.636: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 12.030524028s
Aug 11 09:01:52.640: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 14.034898852s
Aug 11 09:01:54.645: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 16.039345299s
Aug 11 09:01:56.649: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 18.043568782s
Aug 11 09:01:58.653: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 20.047732075s
Aug 11 09:02:00.658: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 22.052368959s
Aug 11 09:02:02.662: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Running", Reason="", readiness=true. Elapsed: 24.057001826s
Aug 11 09:02:04.666: INFO: Pod "pod-subpath-test-secret-ncr2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061203797s
STEP: Saw pod success
Aug 11 09:02:04.667: INFO: Pod "pod-subpath-test-secret-ncr2" satisfied condition "success or failure"
Aug 11 09:02:04.669: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-ncr2 container test-container-subpath-secret-ncr2: 
STEP: delete the pod
Aug 11 09:02:04.706: INFO: Waiting for pod pod-subpath-test-secret-ncr2 to disappear
Aug 11 09:02:04.713: INFO: Pod pod-subpath-test-secret-ncr2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-ncr2
Aug 11 09:02:04.713: INFO: Deleting pod "pod-subpath-test-secret-ncr2" in namespace "subpath-6969"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:02:04.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6969" for this suite.
Aug 11 09:02:10.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:02:10.813: INFO: namespace subpath-6969 deletion completed in 6.094124094s

• [SLOW TEST:32.615 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
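
The ~26 seconds of "Running" polling above is the atomic-writer phase: the container keeps reading a secret key mounted via subPath. A sketch of the shape, with placeholder names; note the documented caveat that subPath mounts of secret and configMap volumes do not receive later updates to the object:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-test-secret
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath-secret
        image: busybox:1.29
        command: ["sh", "-c", "for i in $(seq 1 20); do cat /test/secret-key; sleep 1; done"]
        volumeMounts:
        - name: secret-vol
          mountPath: /test/secret-key
          subPath: mykey             # mounts only this key's file
      volumes:
      - name: secret-vol
        secret:
          secretName: my-secret      # placeholder; must contain the key "mykey"
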
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:02:10.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:02:10.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11" in namespace "projected-480" to be "success or failure"
Aug 11 09:02:10.932: INFO: Pod "downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11": Phase="Pending", Reason="", readiness=false. Elapsed: 18.718821ms
Aug 11 09:02:12.936: INFO: Pod "downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022586405s
Aug 11 09:02:14.940: INFO: Pod "downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026711826s
STEP: Saw pod success
Aug 11 09:02:14.940: INFO: Pod "downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11" satisfied condition "success or failure"
Aug 11 09:02:14.943: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11 container client-container: 
STEP: delete the pod
Aug 11 09:02:15.114: INFO: Waiting for pod downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11 to disappear
Aug 11 09:02:15.126: INFO: Pod downwardapi-volume-8151d033-a8eb-4063-88c1-82edc2488e11 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:02:15.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-480" for this suite.
Aug 11 09:02:21.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:02:21.218: INFO: namespace projected-480 deletion completed in 6.088869109s

• [SLOW TEST:10.405 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
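
The assertion here is that a downward-API file for limits.cpu falls back to the node's allocatable CPU when the container declares no limit. A sketch using the projected downwardAPI form this suite exercises; names are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        # no resources.limits.cpu set, so the file reports node allocatable CPU
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m
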
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:02:21.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 11 09:02:21.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-390'
Aug 11 09:02:21.555: INFO: stderr: ""
Aug 11 09:02:21.555: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 11 09:02:22.560: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:02:22.560: INFO: Found 0 / 1
Aug 11 09:02:23.559: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:02:23.559: INFO: Found 0 / 1
Aug 11 09:02:24.573: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:02:24.573: INFO: Found 0 / 1
Aug 11 09:02:25.559: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:02:25.560: INFO: Found 1 / 1
Aug 11 09:02:25.560: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 11 09:02:25.563: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:02:25.563: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 11 09:02:25.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-v8l75 --namespace=kubectl-390 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 11 09:02:25.668: INFO: stderr: ""
Aug 11 09:02:25.668: INFO: stdout: "pod/redis-master-v8l75 patched\n"
STEP: checking annotations
Aug 11 09:02:25.707: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:02:25.708: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:02:25.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-390" for this suite.
Aug 11 09:02:47.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:02:47.813: INFO: namespace kubectl-390 deletion completed in 22.101591864s

• [SLOW TEST:26.594 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
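
The patch command appears verbatim in the log; to reproduce it by hand and confirm the annotation landed (the pod name and namespace are specific to this run):

    kubectl patch pod redis-master-v8l75 -n kubectl-390 -p '{"metadata":{"annotations":{"x":"y"}}}'
    kubectl get pod redis-master-v8l75 -n kubectl-390 -o jsonpath='{.metadata.annotations.x}'   # prints: y
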
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:02:47.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 11 09:02:52.447: INFO: Successfully updated pod "annotationupdateb61f4c67-02c7-47b8-9e84-2ed87694e5b1"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:02:54.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4031" for this suite.
Aug 11 09:03:16.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:03:16.577: INFO: namespace downward-api-4031 deletion completed in 22.101717432s

• [SLOW TEST:28.763 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
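
"Successfully updated pod" here means the test changed the pod's annotations and then watched the kubelet rewrite the downward-API file in place. A sketch of the mechanism, with placeholder names and values:

    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        builder: alice
    spec:
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations

    # then: kubectl annotate pod annotationupdate-demo builder=bob --overwrite
    # the kubelet refreshes /etc/podinfo/annotations on its next sync
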
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:03:16.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 09:03:16.723: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 11 09:03:21.729: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 11 09:03:21.729: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 11 09:03:23.733: INFO: Creating deployment "test-rollover-deployment"
Aug 11 09:03:23.753: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 11 09:03:25.759: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 11 09:03:25.765: INFO: Ensure that both replica sets have 1 created replica
Aug 11 09:03:25.771: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 11 09:03:25.777: INFO: Updating deployment test-rollover-deployment
Aug 11 09:03:25.777: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 11 09:03:27.789: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 11 09:03:27.794: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 11 09:03:27.799: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:27.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733406, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:29.806: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:29.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733406, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:31.809: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:31.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733409, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:33.808: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:33.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733409, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:35.807: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:35.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733409, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:37.807: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:37.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733409, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:39.807: INFO: all replica sets need to contain the pod-template-hash label
Aug 11 09:03:39.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733409, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732733403, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 11 09:03:41.807: INFO: 
Aug 11 09:03:41.807: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 11 09:03:41.816: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5536,SelfLink:/apis/apps/v1/namespaces/deployment-5536/deployments/test-rollover-deployment,UID:846f6a03-b9c9-4c3f-8155-c7f89ecafff5,ResourceVersion:4160904,Generation:2,CreationTimestamp:2020-08-11 09:03:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-11 09:03:23 +0000 UTC 2020-08-11 09:03:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-11 09:03:40 +0000 UTC 2020-08-11 09:03:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 11 09:03:41.820: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5536,SelfLink:/apis/apps/v1/namespaces/deployment-5536/replicasets/test-rollover-deployment-854595fc44,UID:296c2ad2-dd3e-4c12-8e41-e7f539d07493,ResourceVersion:4160892,Generation:2,CreationTimestamp:2020-08-11 09:03:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 846f6a03-b9c9-4c3f-8155-c7f89ecafff5 0xc002fae2e7 0xc002fae2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 11 09:03:41.820: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 11 09:03:41.820: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5536,SelfLink:/apis/apps/v1/namespaces/deployment-5536/replicasets/test-rollover-controller,UID:7c89a616-5d04-43ef-b4e9-1416665c8765,ResourceVersion:4160902,Generation:2,CreationTimestamp:2020-08-11 09:03:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 846f6a03-b9c9-4c3f-8155-c7f89ecafff5 0xc002e3df87 0xc002e3df88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 09:03:41.820: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5536,SelfLink:/apis/apps/v1/namespaces/deployment-5536/replicasets/test-rollover-deployment-9b8b997cf,UID:9d928115-fb4e-4876-8e7b-f2441eb428dc,ResourceVersion:4160858,Generation:2,CreationTimestamp:2020-08-11 09:03:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 846f6a03-b9c9-4c3f-8155-c7f89ecafff5 0xc002fae590 0xc002fae591}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 09:03:41.823: INFO: Pod "test-rollover-deployment-854595fc44-42fq5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-42fq5,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5536,SelfLink:/api/v1/namespaces/deployment-5536/pods/test-rollover-deployment-854595fc44-42fq5,UID:09c27633-9aa4-47f9-aea3-bdb473b05ae4,ResourceVersion:4160870,Generation:0,CreationTimestamp:2020-08-11 09:03:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 296c2ad2-dd3e-4c12-8e41-e7f539d07493 0xc002faf167 0xc002faf168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k65w5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k65w5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-k65w5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:03:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:03:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:03:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:03:25 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.90,StartTime:2020-08-11 09:03:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-11 09:03:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b7959aa1b761d46cb3a45702ad5c076013dbfd1f49e5d2d4002b29022e377c7f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:03:41.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5536" for this suite.
Aug 11 09:03:47.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:03:47.907: INFO: namespace deployment-5536 deletion completed in 6.081005667s

• [SLOW TEST:31.330 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
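
The repeated "ReplicaSetUpdated ... is progressing" lines reflect the deployment's minReadySeconds of 10 (visible in the dump above): the new pod is Ready at 09:03:29 but only counts as available ten seconds later, after which the old replica sets scale to zero. The test drives the rollover through the API (it swaps the container name as well as the image, redis-slave to redis); an image-only rollover done by hand against this run's objects would look like:

    kubectl -n deployment-5536 set image deployment/test-rollover-deployment redis=gcr.io/kubernetes-e2e-test-images/redis:1.0
    kubectl -n deployment-5536 rollout status deployment/test-rollover-deployment
    kubectl -n deployment-5536 get rs -l name=rollover-pod   # old replica sets at 0 replicas, as asserted above
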
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:03:47.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 11 09:03:48.195: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:03:55.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-735" for this suite.
Aug 11 09:04:17.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:04:18.063: INFO: namespace init-container-735 deletion completed in 22.079531585s

• [SLOW TEST:30.155 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
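
Between "creating the pod" (09:03:48) and teardown (09:03:55) the framework watches the init containers run to completion before the app container starts. A minimal RestartAlways pod of the same shape (names and images are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: busybox:1.29
        command: ["sh", "-c", "true"]
      - name: init2
        image: busybox:1.29
        command: ["sh", "-c", "true"]
      containers:
      - name: run1
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]

Watching kubectl get pod pod-init-demo then shows Init:0/2, Init:1/2, PodInitializing, and finally Running, the sequence of status updates the test asserts on.
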
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:04:18.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 11 09:04:18.160: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:18.736: INFO: Number of nodes with available pods: 0
Aug 11 09:04:18.736: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:04:19.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:19.744: INFO: Number of nodes with available pods: 0
Aug 11 09:04:19.744: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:04:20.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:20.745: INFO: Number of nodes with available pods: 0
Aug 11 09:04:20.745: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:04:21.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:21.769: INFO: Number of nodes with available pods: 0
Aug 11 09:04:21.769: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:04:22.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:22.742: INFO: Number of nodes with available pods: 0
Aug 11 09:04:22.742: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:04:23.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:23.744: INFO: Number of nodes with available pods: 0
Aug 11 09:04:23.745: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:04:24.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:24.744: INFO: Number of nodes with available pods: 2
Aug 11 09:04:24.744: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 11 09:04:24.765: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:04:24.770: INFO: Number of nodes with available pods: 2
Aug 11 09:04:24.770: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1526, will wait for the garbage collector to delete the pods
Aug 11 09:04:26.021: INFO: Deleting DaemonSet.extensions daemon-set took: 5.872304ms
Aug 11 09:04:26.422: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.230962ms
Aug 11 09:04:35.124: INFO: Number of nodes with available pods: 0
Aug 11 09:04:35.124: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 09:04:35.127: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1526/daemonsets","resourceVersion":"4161138"},"items":null}

Aug 11 09:04:35.129: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1526/pods","resourceVersion":"4161138"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:04:35.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1526" for this suite.
Aug 11 09:04:41.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:04:41.254: INFO: namespace daemonsets-1526 deletion completed in 6.112770593s

• [SLOW TEST:23.191 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
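
The revival check flips one daemon pod's status.phase to Failed through the API and waits for the controller to replace it. Plain kubectl cannot set a pod's phase, but deleting a daemon pod demonstrates the same reconciliation (namespace from this run; the pod name is a placeholder):

    kubectl -n daemonsets-1526 get pods -o wide           # pick one daemon-set pod
    kubectl -n daemonsets-1526 delete pod daemon-set-xxxxx
    kubectl -n daemonsets-1526 get pods -o wide -w        # a replacement appears on the same node
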
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:04:41.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6062.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  sleep 1;
done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6062.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  sleep 1;
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 09:04:47.348: INFO: DNS probes using dns-6062/dns-test-4c2f38aa-66f5-4484-8603-76e19233e6ee succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:04:47.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6062" for this suite.
Aug 11 09:04:55.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:04:55.851: INFO: namespace dns-6062 deletion completed in 8.275259351s

• [SLOW TEST:14.597 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
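
The wheezy/jessie probe loops above boil down to "can any pod resolve the API server's service name"; the prober then reads back the OK marker files. A one-line spot check (busybox 1.28 specifically, since nslookup in later busybox builds is unreliable):

    kubectl run dns-test --image=busybox:1.28 --restart=Never -it --rm -- nslookup kubernetes.default.svc.cluster.local
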
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:04:55.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:05:01.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4220" for this suite.
Aug 11 09:05:47.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:05:48.058: INFO: namespace kubelet-test-4220 deletion completed in 46.104327463s

• [SLOW TEST:52.207 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:05:48.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5576
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 11 09:05:48.132: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 11 09:06:12.293: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.176 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5576 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 09:06:12.294: INFO: >>> kubeConfig: /root/.kube/config
I0811 09:06:12.318871       6 log.go:172] (0xc000101600) (0xc000980c80) Create stream
I0811 09:06:12.318908       6 log.go:172] (0xc000101600) (0xc000980c80) Stream added, broadcasting: 1
I0811 09:06:12.322557       6 log.go:172] (0xc000101600) Reply frame received for 1
I0811 09:06:12.322601       6 log.go:172] (0xc000101600) (0xc0016ee000) Create stream
I0811 09:06:12.322618       6 log.go:172] (0xc000101600) (0xc0016ee000) Stream added, broadcasting: 3
I0811 09:06:12.324511       6 log.go:172] (0xc000101600) Reply frame received for 3
I0811 09:06:12.324555       6 log.go:172] (0xc000101600) (0xc0010a2be0) Create stream
I0811 09:06:12.324573       6 log.go:172] (0xc000101600) (0xc0010a2be0) Stream added, broadcasting: 5
I0811 09:06:12.325423       6 log.go:172] (0xc000101600) Reply frame received for 5
I0811 09:06:13.382271       6 log.go:172] (0xc000101600) Data frame received for 5
I0811 09:06:13.382307       6 log.go:172] (0xc0010a2be0) (5) Data frame handling
I0811 09:06:13.382344       6 log.go:172] (0xc000101600) Data frame received for 3
I0811 09:06:13.382358       6 log.go:172] (0xc0016ee000) (3) Data frame handling
I0811 09:06:13.382374       6 log.go:172] (0xc0016ee000) (3) Data frame sent
I0811 09:06:13.382431       6 log.go:172] (0xc000101600) Data frame received for 3
I0811 09:06:13.382463       6 log.go:172] (0xc0016ee000) (3) Data frame handling
I0811 09:06:13.385934       6 log.go:172] (0xc000101600) Data frame received for 1
I0811 09:06:13.385992       6 log.go:172] (0xc000980c80) (1) Data frame handling
I0811 09:06:13.386014       6 log.go:172] (0xc000980c80) (1) Data frame sent
I0811 09:06:13.386039       6 log.go:172] (0xc000101600) (0xc000980c80) Stream removed, broadcasting: 1
I0811 09:06:13.386081       6 log.go:172] (0xc000101600) Go away received
I0811 09:06:13.386174       6 log.go:172] (0xc000101600) (0xc000980c80) Stream removed, broadcasting: 1
I0811 09:06:13.386192       6 log.go:172] (0xc000101600) (0xc0016ee000) Stream removed, broadcasting: 3
I0811 09:06:13.386211       6 log.go:172] (0xc000101600) (0xc0010a2be0) Stream removed, broadcasting: 5
Aug 11 09:06:13.386: INFO: Found all expected endpoints: [netserver-0]
Aug 11 09:06:13.389: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.93 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5576 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 09:06:13.389: INFO: >>> kubeConfig: /root/.kube/config
I0811 09:06:13.412852       6 log.go:172] (0xc0009ef600) (0xc0030a4be0) Create stream
I0811 09:06:13.412891       6 log.go:172] (0xc0009ef600) (0xc0030a4be0) Stream added, broadcasting: 1
I0811 09:06:13.414864       6 log.go:172] (0xc0009ef600) Reply frame received for 1
I0811 09:06:13.414923       6 log.go:172] (0xc0009ef600) (0xc0030a4c80) Create stream
I0811 09:06:13.414939       6 log.go:172] (0xc0009ef600) (0xc0030a4c80) Stream added, broadcasting: 3
I0811 09:06:13.415637       6 log.go:172] (0xc0009ef600) Reply frame received for 3
I0811 09:06:13.415661       6 log.go:172] (0xc0009ef600) (0xc0016ee140) Create stream
I0811 09:06:13.415670       6 log.go:172] (0xc0009ef600) (0xc0016ee140) Stream added, broadcasting: 5
I0811 09:06:13.416432       6 log.go:172] (0xc0009ef600) Reply frame received for 5
I0811 09:06:14.464363       6 log.go:172] (0xc0009ef600) Data frame received for 3
I0811 09:06:14.464429       6 log.go:172] (0xc0030a4c80) (3) Data frame handling
I0811 09:06:14.464459       6 log.go:172] (0xc0030a4c80) (3) Data frame sent
I0811 09:06:14.464535       6 log.go:172] (0xc0009ef600) Data frame received for 3
I0811 09:06:14.464604       6 log.go:172] (0xc0030a4c80) (3) Data frame handling
I0811 09:06:14.465260       6 log.go:172] (0xc0009ef600) Data frame received for 5
I0811 09:06:14.465291       6 log.go:172] (0xc0016ee140) (5) Data frame handling
I0811 09:06:14.468003       6 log.go:172] (0xc0009ef600) Data frame received for 1
I0811 09:06:14.468025       6 log.go:172] (0xc0030a4be0) (1) Data frame handling
I0811 09:06:14.468049       6 log.go:172] (0xc0030a4be0) (1) Data frame sent
I0811 09:06:14.468072       6 log.go:172] (0xc0009ef600) (0xc0030a4be0) Stream removed, broadcasting: 1
I0811 09:06:14.468181       6 log.go:172] (0xc0009ef600) (0xc0030a4be0) Stream removed, broadcasting: 1
I0811 09:06:14.468208       6 log.go:172] (0xc0009ef600) (0xc0030a4c80) Stream removed, broadcasting: 3
I0811 09:06:14.468394       6 log.go:172] (0xc0009ef600) (0xc0016ee140) Stream removed, broadcasting: 5
Aug 11 09:06:14.468: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:06:14.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5576" for this suite.
Aug 11 09:06:38.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:06:38.549: INFO: namespace pod-network-test-5576 deletion completed in 24.075391686s

• [SLOW TEST:50.491 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
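
Stripped of the SPDY stream bookkeeping, each granular check above is a one-line UDP echo: the netserver pods listen on 8081 and answer with their hostname, and the hostexec pod sends "hostName" with nc. By hand (pod IPs 10.244.1.176 and 10.244.2.93 are from this run):

    # from host-test-container-pod; grep drops blank lines from nc's output
    echo hostName | nc -w 1 -u 10.244.1.176 8081 | grep -v '^\s*$'   # expect: netserver-0
    echo hostName | nc -w 1 -u 10.244.2.93 8081 | grep -v '^\s*$'    # expect: netserver-1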
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:06:38.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 11 09:06:39.162: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 11 09:06:39.365: INFO: Waiting for terminating namespaces to be deleted...
Aug 11 09:06:39.923: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 11 09:06:39.928: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug 11 09:06:39.928: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 11 09:06:39.928: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug 11 09:06:39.928: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 09:06:39.928: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 11 09:06:39.933: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container status recorded)
Aug 11 09:06:39.933: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 11 09:06:39.933: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container status recorded)
Aug 11 09:06:39.933: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug 11 09:06:40.167: INFO: Pod kindnet-8kg9z requesting resource cpu=100m on Node iruya-worker2
Aug 11 09:06:40.167: INFO: Pod kindnet-k7tjm requesting resource cpu=100m on Node iruya-worker
Aug 11 09:06:40.167: INFO: Pod kube-proxy-9ktgx requesting resource cpu=0m on Node iruya-worker2
Aug 11 09:06:40.167: INFO: Pod kube-proxy-jzrnl requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e.162a2b75359ff63f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-272/filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e.162a2b75a6fcd984], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e.162a2b7639a98a88], Reason = [Created], Message = [Created container filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e.162a2b76652edad2], Reason = [Started], Message = [Started container filler-pod-433c54d5-54c2-4c2c-95a2-606ff5d63a5e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac.162a2b7535bff8cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-272/filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac.162a2b75c3d9394b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac.162a2b768354f131], Reason = [Created], Message = [Created container filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac.162a2b7699d0334c], Reason = [Started], Message = [Started container filler-pod-4adc16ac-b0cf-4799-8855-98e72dc044ac]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162a2b77139a6a2f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:06:50.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-272" for this suite.
Aug 11 09:06:56.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:06:56.396: INFO: namespace sched-pred-272 deletion completed in 6.166585054s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:17.846 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
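
The arithmetic behind the FailedScheduling event above: per node, only kindnet requests CPU (100m; kube-proxy requests 0m), the filler pods are sized to absorb the remaining allocatable CPU, and the extra pod then fits on neither worker; the third node is the tainted control plane. To inspect the same bookkeeping on a live cluster (a sketch, not from the log):

    kubectl describe node iruya-worker | grep -A 6 'Allocated resources'
    # the pending pod's event then mirrors the log:
    #   0/3 nodes are available: 1 node(s) had taints that the pod didn't
    #   tolerate, 2 Insufficient cpu.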
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:06:56.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 11 09:07:02.695: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 11 09:07:17.783: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:07:17.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5043" for this suite.
Aug 11 09:07:25.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:07:25.885: INFO: namespace pods-5043 deletion completed in 8.095040958s

• [SLOW TEST:29.488 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
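
The grace-period flow above: the delete stamps a deletion timestamp and grace period on the object, the kubelet observes the termination notice, and the API object disappears once teardown completes. The observable half, by hand (the pod name is hypothetical):

    kubectl delete pod mypod --grace-period=30 --wait=false
    # while the grace period runs, the object still exists, marked for deletion:
    kubectl get pod mypod -o jsonpath='{.metadata.deletionTimestamp}'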
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:07:25.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 11 09:07:26.788: INFO: Waiting up to 5m0s for pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0" in namespace "emptydir-6975" to be "success or failure"
Aug 11 09:07:27.403: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 614.738243ms
Aug 11 09:07:29.407: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618689934s
Aug 11 09:07:31.716: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.927765683s
Aug 11 09:07:33.719: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.931555295s
Aug 11 09:07:36.106: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.317758457s
Aug 11 09:07:38.110: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.321985085s
Aug 11 09:07:40.255: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Running", Reason="", readiness=true. Elapsed: 13.466950631s
Aug 11 09:07:42.258: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Running", Reason="", readiness=true. Elapsed: 15.470449721s
Aug 11 09:07:44.722: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Running", Reason="", readiness=true. Elapsed: 17.933727274s
Aug 11 09:07:46.726: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.938432753s
STEP: Saw pod success
Aug 11 09:07:46.726: INFO: Pod "pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0" satisfied condition "success or failure"
Aug 11 09:07:46.730: INFO: Trying to get logs from node iruya-worker2 pod pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0 container test-container: 
STEP: delete the pod
Aug 11 09:07:47.538: INFO: Waiting for pod pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0 to disappear
Aug 11 09:07:47.614: INFO: Pod pod-eca5c819-90ea-4e43-b0b8-f804f9c0f6c0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:07:47.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6975" for this suite.
Aug 11 09:07:53.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:07:53.963: INFO: namespace emptydir-6975 deletion completed in 6.345979688s

• [SLOW TEST:28.078 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
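
Every emptydir permissions case in this suite has the same shape: mount an emptyDir, create a file with the mode under test, and read it back. A rough equivalent of the (non-root,0777,default) case (a sketch; names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo          # hypothetical name
    spec:
      securityContext:
        runAsUser: 1001                 # the "non-root" part
      containers:
      - name: test-container
        image: busybox
        command: ["/bin/sh", "-c", "echo hi > /ephemeral/f && chmod 0777 /ephemeral/f && ls -l /ephemeral/f"]
        volumeMounts:
        - { name: scratch, mountPath: /ephemeral }
      volumes:
      - name: scratch
        emptyDir: {}                    # "default" medium; medium: Memory gives the tmpfs variants
      restartPolicy: Never
    EOF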
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:07:53.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9c05df9e-af0d-4152-9396-7035d5ddaa87
STEP: Creating a pod to test consume configMaps
Aug 11 09:07:54.754: INFO: Waiting up to 5m0s for pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf" in namespace "configmap-3596" to be "success or failure"
Aug 11 09:07:55.816: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.061664801s
Aug 11 09:07:57.820: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.065615133s
Aug 11 09:07:59.824: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.069720258s
Aug 11 09:08:01.828: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.073810631s
Aug 11 09:08:04.350: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.59549029s
Aug 11 09:08:07.256: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.501992651s
Aug 11 09:08:09.260: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.505451811s
Aug 11 09:08:12.836: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.08144177s
Aug 11 09:08:14.840: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.085806866s
Aug 11 09:08:18.040: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 23.285980621s
Aug 11 09:08:20.044: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.289806999s
Aug 11 09:08:22.047: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.293247498s
Aug 11 09:08:24.256: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.501535347s
Aug 11 09:08:26.259: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 31.505097613s
Aug 11 09:08:28.568: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 33.813735534s
Aug 11 09:08:31.374: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Pending", Reason="", readiness=false. Elapsed: 36.619687412s
Aug 11 09:08:33.377: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.623025766s
STEP: Saw pod success
Aug 11 09:08:33.377: INFO: Pod "pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf" satisfied condition "success or failure"
Aug 11 09:08:33.379: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf container configmap-volume-test: 
STEP: delete the pod
Aug 11 09:08:34.349: INFO: Waiting for pod pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf to disappear
Aug 11 09:08:34.909: INFO: Pod pod-configmaps-38240308-ade1-4b30-8dc9-19f6d50bbdcf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:08:34.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3596" for this suite.
Aug 11 09:08:41.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:08:41.941: INFO: namespace configmap-3596 deletion completed in 7.027286596s

• [SLOW TEST:47.978 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
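
The multi-volume case mounts one ConfigMap at two paths in the same pod and reads the key back from both. Sketch (the ConfigMap name and key are illustrative):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-two-mounts               # hypothetical name
    spec:
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["/bin/sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
        volumeMounts:
        - { name: cm-a, mountPath: /etc/cm-a }
        - { name: cm-b, mountPath: /etc/cm-b }
      volumes:
      - { name: cm-a, configMap: { name: demo-cm } }
      - { name: cm-b, configMap: { name: demo-cm } }
      restartPolicy: Never
    EOF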
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:08:41.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-68360987-aed7-4040-bdc8-d90c3ad038c6
STEP: Creating a pod to test consume secrets
Aug 11 09:08:42.385: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87" in namespace "projected-4551" to be "success or failure"
Aug 11 09:08:42.398: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87": Phase="Pending", Reason="", readiness=false. Elapsed: 13.363032ms
Aug 11 09:08:44.402: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016517967s
Aug 11 09:08:46.404: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019415991s
Aug 11 09:08:48.531: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145557822s
Aug 11 09:08:50.534: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87": Phase="Running", Reason="", readiness=true. Elapsed: 8.148513357s
Aug 11 09:08:52.602: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217348876s
STEP: Saw pod success
Aug 11 09:08:52.602: INFO: Pod "pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87" satisfied condition "success or failure"
Aug 11 09:08:52.604: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87 container projected-secret-volume-test: 
STEP: delete the pod
Aug 11 09:08:52.670: INFO: Waiting for pod pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87 to disappear
Aug 11 09:08:52.782: INFO: Pod pod-projected-secrets-487236e4-371f-4bb3-a70f-f510bfeb8d87 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:08:52.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4551" for this suite.
Aug 11 09:09:00.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:09:01.435: INFO: namespace projected-4551 deletion completed in 8.648491738s

• [SLOW TEST:19.493 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
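
A projected secret volume is a plain secret volume wrapped in "projected", which lets several sources (secrets, configMaps, downwardAPI, serviceAccountToken) share one mount point. Minimal sketch (names illustrative):

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo       # hypothetical name
    spec:
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["/bin/sh", "-c", "cat /projected/data-1"]
        volumeMounts:
        - { name: proj, mountPath: /projected, readOnly: true }
      volumes:
      - name: proj
        projected:
          sources:
          - secret: { name: demo-secret }
      restartPolicy: Never
    EOF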
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:09:01.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 11 09:09:02.223: INFO: Waiting up to 5m0s for pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f" in namespace "downward-api-7642" to be "success or failure"
Aug 11 09:09:02.254: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.974012ms
Aug 11 09:09:04.257: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034047846s
Aug 11 09:09:06.260: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037095103s
Aug 11 09:09:08.264: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040799052s
Aug 11 09:09:10.460: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23738954s
Aug 11 09:09:12.464: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.241038604s
Aug 11 09:09:14.973: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Running", Reason="", readiness=true. Elapsed: 12.749411709s
Aug 11 09:09:17.589: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.365790855s
STEP: Saw pod success
Aug 11 09:09:17.589: INFO: Pod "downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f" satisfied condition "success or failure"
Aug 11 09:09:17.837: INFO: Trying to get logs from node iruya-worker2 pod downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f container dapi-container: 
STEP: delete the pod
Aug 11 09:09:18.752: INFO: Waiting for pod downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f to disappear
Aug 11 09:09:19.389: INFO: Pod downward-api-42166825-f852-4b95-85a8-186b5d5e9f7f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:09:19.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7642" for this suite.
Aug 11 09:09:29.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:09:30.065: INFO: namespace downward-api-7642 deletion completed in 10.581654672s

• [SLOW TEST:28.630 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
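
The downward API test injects pod metadata into the environment via fieldRef; the UID case reduces to this (a sketch; names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-uid-demo               # hypothetical name
    spec:
      containers:
      - name: dapi-container
        image: busybox
        command: ["/bin/sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef: { fieldPath: metadata.uid }
      restartPolicy: Never
    EOF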
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:09:30.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 11 09:09:30.903: INFO: Waiting up to 5m0s for pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb" in namespace "emptydir-9781" to be "success or failure"
Aug 11 09:09:31.071: INFO: Pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb": Phase="Pending", Reason="", readiness=false. Elapsed: 167.244721ms
Aug 11 09:09:33.073: INFO: Pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169903239s
Aug 11 09:09:35.077: INFO: Pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174019745s
Aug 11 09:09:37.080: INFO: Pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177152724s
Aug 11 09:09:39.083: INFO: Pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.179607947s
STEP: Saw pod success
Aug 11 09:09:39.083: INFO: Pod "pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb" satisfied condition "success or failure"
Aug 11 09:09:39.085: INFO: Trying to get logs from node iruya-worker pod pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb container test-container: 
STEP: delete the pod
Aug 11 09:09:39.110: INFO: Waiting for pod pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb to disappear
Aug 11 09:09:39.124: INFO: Pod pod-f5f7c2c6-5474-4d70-89ba-26fd3da826eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:09:39.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9781" for this suite.
Aug 11 09:09:45.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:09:45.188: INFO: namespace emptydir-9781 deletion completed in 6.060814806s

• [SLOW TEST:15.122 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:09:45.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 11 09:09:45.723: INFO: Waiting up to 5m0s for pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe" in namespace "emptydir-7758" to be "success or failure"
Aug 11 09:09:46.011: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 287.09325ms
Aug 11 09:09:48.610: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.886520601s
Aug 11 09:09:51.299: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.575786967s
Aug 11 09:09:53.676: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 7.952583814s
Aug 11 09:09:56.035: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311396915s
Aug 11 09:09:58.699: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.97547154s
Aug 11 09:10:00.713: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 14.989603208s
Aug 11 09:10:02.717: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 16.993149389s
Aug 11 09:10:06.245: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 20.521512191s
Aug 11 09:10:08.766: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Pending", Reason="", readiness=false. Elapsed: 23.042549853s
Aug 11 09:10:10.892: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.168213006s
STEP: Saw pod success
Aug 11 09:10:10.892: INFO: Pod "pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe" satisfied condition "success or failure"
Aug 11 09:10:10.895: INFO: Trying to get logs from node iruya-worker2 pod pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe container test-container: 
STEP: delete the pod
Aug 11 09:10:12.114: INFO: Waiting for pod pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe to disappear
Aug 11 09:10:12.352: INFO: Pod pod-3780fef8-90b9-4c82-a83b-a5c79cfa3afe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:10:12.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7758" for this suite.
Aug 11 09:10:18.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:10:18.472: INFO: namespace emptydir-7758 deletion completed in 6.11611724s

• [SLOW TEST:33.284 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:10:18.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 11 09:10:18.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2644'
Aug 11 09:10:51.533: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 11 09:10:51.533: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 11 09:10:51.790: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-q665n]
Aug 11 09:10:51.790: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-q665n" in namespace "kubectl-2644" to be "running and ready"
Aug 11 09:10:51.958: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 167.555233ms
Aug 11 09:10:53.961: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170750685s
Aug 11 09:10:55.964: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173681192s
Aug 11 09:10:57.966: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176137867s
Aug 11 09:10:59.969: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178703675s
Aug 11 09:11:02.602: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.811444828s
Aug 11 09:11:04.902: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 13.111625166s
Aug 11 09:11:06.904: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Pending", Reason="", readiness=false. Elapsed: 15.114027661s
Aug 11 09:11:08.908: INFO: Pod "e2e-test-nginx-rc-q665n": Phase="Running", Reason="", readiness=true. Elapsed: 17.11751334s
Aug 11 09:11:08.908: INFO: Pod "e2e-test-nginx-rc-q665n" satisfied condition "running and ready"
Aug 11 09:11:08.908: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-q665n]
Aug 11 09:11:08.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2644'
Aug 11 09:11:09.019: INFO: stderr: ""
Aug 11 09:11:09.019: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug 11 09:11:09.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2644'
Aug 11 09:11:09.159: INFO: stderr: ""
Aug 11 09:11:09.159: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:11:09.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2644" for this suite.
Aug 11 09:11:17.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:11:18.372: INFO: namespace kubectl-2644 deletion completed in 9.209217563s

• [SLOW TEST:59.900 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
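
As the stderr warning above says, --generator=run/v1 (kubectl run producing a ReplicationController) was already deprecated in this 1.15-era client. Side by side:

    # what the test ran: creates replicationcontroller/e2e-test-nginx-rc
    kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
    # the suggested replacements: a bare pod, or an explicit workload object
    kubectl run e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
    kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine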
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:11:18.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-4db2feb7-22e9-4464-a2a0-4b733ad228b6
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:11:36.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4129" for this suite.
Aug 11 09:12:08.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:12:09.506: INFO: namespace configmap-4129 deletion completed in 33.342175071s

• [SLOW TEST:51.134 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
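
The binary case exercises the ConfigMap binaryData field, which carries base64-encoded bytes alongside the UTF-8 string data field. Sketch (the name, key, and file path are illustrative):

    # file contents that are not valid UTF-8 land under binaryData
    kubectl create configmap bin-demo --from-literal=text-key=hello --from-file=bin-key=/path/to/blob
    kubectl get configmap bin-demo -o jsonpath='{.binaryData}'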
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:12:09.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 11 09:12:10.930: INFO: Waiting up to 5m0s for pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc" in namespace "emptydir-4691" to be "success or failure"
Aug 11 09:12:11.293: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 362.856653ms
Aug 11 09:12:13.312: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382802365s
Aug 11 09:12:15.347: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417751699s
Aug 11 09:12:19.073: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14380801s
Aug 11 09:12:21.077: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.147617237s
Aug 11 09:12:25.343: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.413092941s
Aug 11 09:12:27.671: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.74158808s
Aug 11 09:12:29.675: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.745089631s
Aug 11 09:12:32.121: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.191226524s
Aug 11 09:12:34.124: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.194720579s
Aug 11 09:12:36.199: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.269424694s
Aug 11 09:12:38.684: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.754230773s
STEP: Saw pod success
Aug 11 09:12:38.684: INFO: Pod "pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc" satisfied condition "success or failure"
Aug 11 09:12:38.687: INFO: Trying to get logs from node iruya-worker pod pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc container test-container: 
STEP: delete the pod
Aug 11 09:12:38.776: INFO: Waiting for pod pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc to disappear
Aug 11 09:12:38.929: INFO: Pod pod-e3b4d112-15c2-456b-9ff9-b2ae469c7bfc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:12:38.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4691" for this suite.
Aug 11 09:12:46.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:12:47.038: INFO: namespace emptydir-4691 deletion completed in 8.104708276s

• [SLOW TEST:37.532 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:12:47.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 11 09:12:47.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8526'
Aug 11 09:12:47.395: INFO: stderr: ""
Aug 11 09:12:47.395: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 11 09:12:57.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8526 -o json'
Aug 11 09:12:57.538: INFO: stderr: ""
Aug 11 09:12:57.538: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-11T09:12:47Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-8526\",\n        \"resourceVersion\": \"4162521\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8526/pods/e2e-test-nginx-pod\",\n        \"uid\": \"53d5c13e-3d9a-4c00-9830-154d9a546072\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-qzsbd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-qzsbd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-qzsbd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T09:12:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T09:12:56Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T09:12:56Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-11T09:12:47Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://03e54ac33d5782f803950573a7385c4294bf399fd7af2bb3b6c0c7b70d52f06f\",\n                
\"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-11T09:12:55Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.7\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.103\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-11T09:12:48Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 11 09:12:57.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8526'
Aug 11 09:12:58.788: INFO: stderr: ""
Aug 11 09:12:58.788: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Aug 11 09:12:58.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8526'
Aug 11 09:13:23.782: INFO: stderr: ""
Aug 11 09:13:23.782: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:13:23.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8526" for this suite.
Aug 11 09:13:33.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:13:33.899: INFO: namespace kubectl-8526 deletion completed in 10.106852408s

• [SLOW TEST:46.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
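The replace test above round-trips a pod manifest through kubectl: dump the pod as JSON, swap the container image, and feed the result back to `kubectl replace -f -`. A minimal manual sketch of the same flow, using the pod and namespace names from the run above (the sed edit stands in for the test's in-memory substitution):

# Create the pod, rewrite its image field, and replace the object in place.
kubectl run e2e-test-nginx-pod --image=docker.io/library/nginx:1.14-alpine \
    --restart=Never --namespace=kubectl-8526
kubectl get pod e2e-test-nginx-pod --namespace=kubectl-8526 -o json \
    | sed 's|nginx:1.14-alpine|busybox:1.29|' \
    | kubectl replace -f - --namespace=kubectl-8526
# Confirm the image the verification step checks for.
kubectl get pod e2e-test-nginx-pod --namespace=kubectl-8526 \
    -o jsonpath='{.spec.containers[0].image}'
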
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:13:33.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 11 09:13:46.148: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-998 pod-service-account-90572ee3-e9b1-476b-b727-5961ae231dea -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 11 09:13:47.610: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-998 pod-service-account-90572ee3-e9b1-476b-b727-5961ae231dea -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 11 09:13:48.700: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-998 pod-service-account-90572ee3-e9b1-476b-b727-5961ae231dea -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:13:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-998" for this suite.
Aug 11 09:13:57.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:13:57.656: INFO: namespace svcaccounts-998 deletion completed in 8.22207088s

• [SLOW TEST:23.758 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
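The three exec calls above read the auto-mounted ServiceAccount credential back out of the pod; every pod using the default ServiceAccount gets the same three files under the fixed path. A sketch using the pod from this run (the loop is just shorthand for the test's three separate execs):

POD=pod-service-account-90572ee3-e9b1-476b-b727-5961ae231dea
for f in token ca.crt namespace; do
    kubectl exec --namespace=svcaccounts-998 "$POD" -c=test -- \
        cat "/var/run/secrets/kubernetes.io/serviceaccount/$f"
done
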
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:13:57.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 11 09:13:59.838: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:13:59.860: INFO: Number of nodes with available pods: 0
Aug 11 09:13:59.860: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:00.865: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:00.869: INFO: Number of nodes with available pods: 0
Aug 11 09:14:00.869: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:02.005: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:02.008: INFO: Number of nodes with available pods: 0
Aug 11 09:14:02.008: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:03.460: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:04.327: INFO: Number of nodes with available pods: 0
Aug 11 09:14:04.328: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:05.808: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:06.178: INFO: Number of nodes with available pods: 0
Aug 11 09:14:06.178: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:07.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:07.176: INFO: Number of nodes with available pods: 0
Aug 11 09:14:07.177: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:07.993: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:08.083: INFO: Number of nodes with available pods: 0
Aug 11 09:14:08.083: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:08.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:09.087: INFO: Number of nodes with available pods: 0
Aug 11 09:14:09.088: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:10.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:11.448: INFO: Number of nodes with available pods: 0
Aug 11 09:14:11.448: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:14:11.934: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:12.153: INFO: Number of nodes with available pods: 2
Aug 11 09:14:12.154: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 11 09:14:12.205: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:12.208: INFO: Number of nodes with available pods: 1
Aug 11 09:14:12.208: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:14.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:15.984: INFO: Number of nodes with available pods: 1
Aug 11 09:14:15.984: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:17.576: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:18.191: INFO: Number of nodes with available pods: 1
Aug 11 09:14:18.191: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:18.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:18.444: INFO: Number of nodes with available pods: 1
Aug 11 09:14:18.444: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:19.215: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:19.218: INFO: Number of nodes with available pods: 1
Aug 11 09:14:19.218: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:21.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:23.089: INFO: Number of nodes with available pods: 1
Aug 11 09:14:23.089: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:23.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:23.780: INFO: Number of nodes with available pods: 1
Aug 11 09:14:23.780: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:25.079: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:25.440: INFO: Number of nodes with available pods: 1
Aug 11 09:14:25.440: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:27.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:29.080: INFO: Number of nodes with available pods: 1
Aug 11 09:14:29.080: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:29.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:29.433: INFO: Number of nodes with available pods: 1
Aug 11 09:14:29.433: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:31.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:31.160: INFO: Number of nodes with available pods: 1
Aug 11 09:14:31.160: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:31.489: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:31.491: INFO: Number of nodes with available pods: 1
Aug 11 09:14:31.491: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:32.441: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:32.463: INFO: Number of nodes with available pods: 1
Aug 11 09:14:32.463: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:33.326: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:33.343: INFO: Number of nodes with available pods: 1
Aug 11 09:14:33.343: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:34.214: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:34.217: INFO: Number of nodes with available pods: 1
Aug 11 09:14:34.217: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:35.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:35.451: INFO: Number of nodes with available pods: 1
Aug 11 09:14:35.451: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:36.212: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:36.215: INFO: Number of nodes with available pods: 1
Aug 11 09:14:36.215: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:37.375: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:37.445: INFO: Number of nodes with available pods: 1
Aug 11 09:14:37.445: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:38.311: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:38.317: INFO: Number of nodes with available pods: 1
Aug 11 09:14:38.317: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 11 09:14:39.551: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 11 09:14:39.595: INFO: Number of nodes with available pods: 2
Aug 11 09:14:39.595: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1732, will wait for the garbage collector to delete the pods
Aug 11 09:14:40.897: INFO: Deleting DaemonSet.extensions daemon-set took: 5.474023ms
Aug 11 09:14:41.197: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.188189ms
Aug 11 09:14:48.454: INFO: Number of nodes with available pods: 0
Aug 11 09:14:48.454: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 09:14:48.457: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1732/daemonsets","resourceVersion":"4162811"},"items":null}

Aug 11 09:14:48.458: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1732/pods","resourceVersion":"4162811"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:14:48.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1732" for this suite.
Aug 11 09:15:07.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:15:08.161: INFO: namespace daemonsets-1732 deletion completed in 19.690514511s

• [SLOW TEST:70.504 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:15:08.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6458
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6458
STEP: Deleting pre-stop pod
Aug 11 09:15:36.030: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:15:36.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6458" for this suite.
Aug 11 09:16:20.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:16:24.053: INFO: namespace prestop-6458 deletion completed in 47.647463333s

• [SLOW TEST:75.892 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:16:24.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 11 09:16:37.178: INFO: Successfully updated pod "labelsupdate4ff24a85-0050-4b48-8bb4-e2267d088c6e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:16:39.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7569" for this suite.
Aug 11 09:17:03.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:17:03.588: INFO: namespace downward-api-7569 deletion completed in 24.38351231s

• [SLOW TEST:39.536 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:17:03.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 11 09:17:04.088: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:17:04.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5851" for this suite.
Aug 11 09:17:12.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:17:12.975: INFO: namespace kubectl-5851 deletion completed in 8.44807788s

• [SLOW TEST:9.387 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:17:12.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-200
I0811 09:17:13.869170       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-200, replica count: 1
I0811 09:17:14.919565       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:15.919749       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:16.919966       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:17.920191       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:18.920372       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:19.920547       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:20.920695       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0811 09:17:21.920900       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 11 09:17:22.909: INFO: Created: latency-svc-fs5rr
Aug 11 09:17:23.257: INFO: Got endpoints: latency-svc-fs5rr [1.236814594s]
Aug 11 09:17:24.240: INFO: Created: latency-svc-2c5b5
Aug 11 09:17:24.279: INFO: Got endpoints: latency-svc-2c5b5 [1.02135937s]
Aug 11 09:17:25.288: INFO: Created: latency-svc-lc9vx
Aug 11 09:17:25.500: INFO: Got endpoints: latency-svc-lc9vx [2.242771229s]
Aug 11 09:17:25.887: INFO: Created: latency-svc-kckqf
Aug 11 09:17:26.204: INFO: Got endpoints: latency-svc-kckqf [2.9468334s]
Aug 11 09:17:26.246: INFO: Created: latency-svc-c4d6c
Aug 11 09:17:26.265: INFO: Got endpoints: latency-svc-c4d6c [3.007060392s]
Aug 11 09:17:26.354: INFO: Created: latency-svc-v4hbt
Aug 11 09:17:26.358: INFO: Got endpoints: latency-svc-v4hbt [3.100400443s]
Aug 11 09:17:26.575: INFO: Created: latency-svc-bhd45
Aug 11 09:17:26.581: INFO: Got endpoints: latency-svc-bhd45 [3.322869555s]
Aug 11 09:17:26.657: INFO: Created: latency-svc-87flh
Aug 11 09:17:26.748: INFO: Got endpoints: latency-svc-87flh [3.490585437s]
Aug 11 09:17:26.753: INFO: Created: latency-svc-vn6zm
Aug 11 09:17:26.764: INFO: Got endpoints: latency-svc-vn6zm [3.50570982s]
Aug 11 09:17:26.801: INFO: Created: latency-svc-9prjg
Aug 11 09:17:26.815: INFO: Got endpoints: latency-svc-9prjg [3.55684654s]
Aug 11 09:17:26.946: INFO: Created: latency-svc-n5t4m
Aug 11 09:17:27.018: INFO: Created: latency-svc-hj5t6
Aug 11 09:17:27.018: INFO: Got endpoints: latency-svc-n5t4m [3.760259139s]
Aug 11 09:17:27.162: INFO: Got endpoints: latency-svc-hj5t6 [3.904159048s]
Aug 11 09:17:27.165: INFO: Created: latency-svc-884qf
Aug 11 09:17:27.175: INFO: Got endpoints: latency-svc-884qf [3.916863441s]
Aug 11 09:17:27.233: INFO: Created: latency-svc-nd7rt
Aug 11 09:17:27.241: INFO: Got endpoints: latency-svc-nd7rt [3.983434904s]
Aug 11 09:17:27.366: INFO: Created: latency-svc-frzhh
Aug 11 09:17:27.403: INFO: Got endpoints: latency-svc-frzhh [4.144825971s]
Aug 11 09:17:27.403: INFO: Created: latency-svc-pm4h9
Aug 11 09:17:27.440: INFO: Got endpoints: latency-svc-pm4h9 [4.181449453s]
Aug 11 09:17:27.569: INFO: Created: latency-svc-zrjjr
Aug 11 09:17:27.795: INFO: Created: latency-svc-rk74s
Aug 11 09:17:27.795: INFO: Got endpoints: latency-svc-zrjjr [3.516284915s]
Aug 11 09:17:27.824: INFO: Got endpoints: latency-svc-rk74s [2.323374869s]
Aug 11 09:17:29.640: INFO: Created: latency-svc-6j296
Aug 11 09:17:30.307: INFO: Got endpoints: latency-svc-6j296 [4.102320911s]
Aug 11 09:17:32.302: INFO: Created: latency-svc-qqv8g
Aug 11 09:17:33.897: INFO: Got endpoints: latency-svc-qqv8g [7.632587497s]
Aug 11 09:17:36.124: INFO: Created: latency-svc-2jps2
Aug 11 09:17:36.604: INFO: Got endpoints: latency-svc-2jps2 [10.245469735s]
Aug 11 09:17:37.895: INFO: Created: latency-svc-hzpgn
Aug 11 09:17:37.949: INFO: Got endpoints: latency-svc-hzpgn [11.367816908s]
Aug 11 09:17:39.068: INFO: Created: latency-svc-mn2rd
Aug 11 09:17:39.070: INFO: Got endpoints: latency-svc-mn2rd [12.321552925s]
Aug 11 09:17:40.900: INFO: Created: latency-svc-hq7pp
Aug 11 09:17:40.918: INFO: Got endpoints: latency-svc-hq7pp [14.154311954s]
Aug 11 09:17:42.686: INFO: Created: latency-svc-4gp5n
Aug 11 09:17:43.060: INFO: Got endpoints: latency-svc-4gp5n [16.245586846s]
Aug 11 09:17:46.475: INFO: Created: latency-svc-dvszn
Aug 11 09:17:46.476: INFO: Created: latency-svc-74jsj
Aug 11 09:17:47.334: INFO: Got endpoints: latency-svc-dvszn [20.315431636s]
Aug 11 09:17:47.334: INFO: Got endpoints: latency-svc-74jsj [20.171723075s]
Aug 11 09:17:47.344: INFO: Created: latency-svc-rgpck
Aug 11 09:17:47.428: INFO: Got endpoints: latency-svc-rgpck [20.252608411s]
Aug 11 09:17:49.499: INFO: Created: latency-svc-tgsfs
Aug 11 09:17:49.505: INFO: Got endpoints: latency-svc-tgsfs [22.263646849s]
Aug 11 09:17:51.128: INFO: Created: latency-svc-8wznl
Aug 11 09:17:51.737: INFO: Got endpoints: latency-svc-8wznl [24.334467845s]
Aug 11 09:17:52.199: INFO: Created: latency-svc-mwpjt
Aug 11 09:17:52.654: INFO: Got endpoints: latency-svc-mwpjt [25.21456258s]
Aug 11 09:17:53.042: INFO: Created: latency-svc-k78fj
Aug 11 09:17:53.087: INFO: Got endpoints: latency-svc-k78fj [25.29190431s]
Aug 11 09:17:53.481: INFO: Created: latency-svc-d9dcp
Aug 11 09:17:53.489: INFO: Got endpoints: latency-svc-d9dcp [25.664674725s]
Aug 11 09:17:53.762: INFO: Created: latency-svc-r4j79
Aug 11 09:17:53.770: INFO: Got endpoints: latency-svc-r4j79 [23.463251152s]
Aug 11 09:17:53.807: INFO: Created: latency-svc-87dmn
Aug 11 09:17:54.360: INFO: Got endpoints: latency-svc-87dmn [20.462184243s]
Aug 11 09:17:54.365: INFO: Created: latency-svc-fqkv9
Aug 11 09:17:54.635: INFO: Got endpoints: latency-svc-fqkv9 [18.031639922s]
Aug 11 09:17:55.055: INFO: Created: latency-svc-n82gj
Aug 11 09:17:55.432: INFO: Got endpoints: latency-svc-n82gj [17.483119472s]
Aug 11 09:17:55.438: INFO: Created: latency-svc-sdf9x
Aug 11 09:17:55.731: INFO: Got endpoints: latency-svc-sdf9x [16.660696071s]
Aug 11 09:17:55.775: INFO: Created: latency-svc-z7km8
Aug 11 09:17:56.031: INFO: Got endpoints: latency-svc-z7km8 [15.113256668s]
Aug 11 09:17:56.033: INFO: Created: latency-svc-jljzw
Aug 11 09:17:56.040: INFO: Got endpoints: latency-svc-jljzw [12.979038234s]
Aug 11 09:17:56.649: INFO: Created: latency-svc-m5mbz
Aug 11 09:17:56.911: INFO: Got endpoints: latency-svc-m5mbz [9.576651188s]
Aug 11 09:17:56.913: INFO: Created: latency-svc-g8qd2
Aug 11 09:17:56.925: INFO: Got endpoints: latency-svc-g8qd2 [9.591448393s]
Aug 11 09:17:56.976: INFO: Created: latency-svc-cb25j
Aug 11 09:17:57.002: INFO: Got endpoints: latency-svc-cb25j [9.574668102s]
Aug 11 09:17:57.086: INFO: Created: latency-svc-ltdkj
Aug 11 09:17:57.116: INFO: Got endpoints: latency-svc-ltdkj [7.610568729s]
Aug 11 09:17:57.290: INFO: Created: latency-svc-s6tpw
Aug 11 09:17:57.293: INFO: Got endpoints: latency-svc-s6tpw [5.555580755s]
Aug 11 09:17:57.358: INFO: Created: latency-svc-rznnp
Aug 11 09:17:57.368: INFO: Got endpoints: latency-svc-rznnp [4.713896719s]
Aug 11 09:17:57.456: INFO: Created: latency-svc-w7x4s
Aug 11 09:17:57.464: INFO: Got endpoints: latency-svc-w7x4s [4.376966736s]
Aug 11 09:17:57.489: INFO: Created: latency-svc-26fw5
Aug 11 09:17:57.513: INFO: Got endpoints: latency-svc-26fw5 [4.023875908s]
Aug 11 09:17:57.644: INFO: Created: latency-svc-kzd72
Aug 11 09:17:57.657: INFO: Got endpoints: latency-svc-kzd72 [3.886251985s]
Aug 11 09:17:57.681: INFO: Created: latency-svc-gmwvj
Aug 11 09:17:57.699: INFO: Got endpoints: latency-svc-gmwvj [3.339171472s]
Aug 11 09:17:57.729: INFO: Created: latency-svc-zq8s2
Aug 11 09:17:57.862: INFO: Got endpoints: latency-svc-zq8s2 [3.226963958s]
Aug 11 09:17:57.864: INFO: Created: latency-svc-mvj7h
Aug 11 09:17:57.892: INFO: Got endpoints: latency-svc-mvj7h [2.459635875s]
Aug 11 09:17:58.078: INFO: Created: latency-svc-m7g85
Aug 11 09:17:58.247: INFO: Got endpoints: latency-svc-m7g85 [2.516145318s]
Aug 11 09:17:58.247: INFO: Created: latency-svc-mzgqp
Aug 11 09:17:58.289: INFO: Got endpoints: latency-svc-mzgqp [2.257704584s]
Aug 11 09:17:58.427: INFO: Created: latency-svc-tm7jp
Aug 11 09:17:58.450: INFO: Got endpoints: latency-svc-tm7jp [2.410254791s]
Aug 11 09:17:58.595: INFO: Created: latency-svc-54f6v
Aug 11 09:17:58.630: INFO: Got endpoints: latency-svc-54f6v [1.719140624s]
Aug 11 09:17:58.719: INFO: Created: latency-svc-8qw5g
Aug 11 09:17:58.738: INFO: Got endpoints: latency-svc-8qw5g [1.812857473s]
Aug 11 09:17:58.815: INFO: Created: latency-svc-6c2sq
Aug 11 09:17:58.953: INFO: Got endpoints: latency-svc-6c2sq [1.95081542s]
Aug 11 09:17:58.983: INFO: Created: latency-svc-p982t
Aug 11 09:17:59.020: INFO: Got endpoints: latency-svc-p982t [1.904417311s]
Aug 11 09:17:59.134: INFO: Created: latency-svc-q66wg
Aug 11 09:17:59.200: INFO: Got endpoints: latency-svc-q66wg [1.906878148s]
Aug 11 09:17:59.343: INFO: Created: latency-svc-wgp4c
Aug 11 09:17:59.363: INFO: Got endpoints: latency-svc-wgp4c [1.994440832s]
Aug 11 09:18:00.052: INFO: Created: latency-svc-xkz25
Aug 11 09:18:00.540: INFO: Got endpoints: latency-svc-xkz25 [3.075550515s]
Aug 11 09:18:00.592: INFO: Created: latency-svc-mmvdr
Aug 11 09:18:00.827: INFO: Got endpoints: latency-svc-mmvdr [3.314343764s]
Aug 11 09:18:01.127: INFO: Created: latency-svc-2tgxb
Aug 11 09:18:01.348: INFO: Got endpoints: latency-svc-2tgxb [3.691443447s]
Aug 11 09:18:01.348: INFO: Created: latency-svc-qxdsq
Aug 11 09:18:01.350: INFO: Got endpoints: latency-svc-qxdsq [3.65099186s]
Aug 11 09:18:01.932: INFO: Created: latency-svc-tg9zl
Aug 11 09:18:01.993: INFO: Got endpoints: latency-svc-tg9zl [4.130576674s]
Aug 11 09:18:02.151: INFO: Created: latency-svc-4n6jx
Aug 11 09:18:02.468: INFO: Got endpoints: latency-svc-4n6jx [4.576229234s]
Aug 11 09:18:02.471: INFO: Created: latency-svc-8sl64
Aug 11 09:18:02.538: INFO: Got endpoints: latency-svc-8sl64 [4.291280005s]
Aug 11 09:18:02.911: INFO: Created: latency-svc-d8t9w
Aug 11 09:18:03.363: INFO: Got endpoints: latency-svc-d8t9w [5.074223646s]
Aug 11 09:18:03.426: INFO: Created: latency-svc-45t68
Aug 11 09:18:03.594: INFO: Created: latency-svc-8kf6t
Aug 11 09:18:03.594: INFO: Got endpoints: latency-svc-45t68 [5.143794676s]
Aug 11 09:18:03.663: INFO: Created: latency-svc-2jwnr
Aug 11 09:18:03.663: INFO: Got endpoints: latency-svc-8kf6t [5.033224045s]
Aug 11 09:18:03.684: INFO: Got endpoints: latency-svc-2jwnr [4.946070148s]
Aug 11 09:18:03.815: INFO: Created: latency-svc-27952
Aug 11 09:18:03.871: INFO: Got endpoints: latency-svc-27952 [4.917360343s]
Aug 11 09:18:04.357: INFO: Created: latency-svc-6zf9k
Aug 11 09:18:04.875: INFO: Got endpoints: latency-svc-6zf9k [5.854663891s]
Aug 11 09:18:04.943: INFO: Created: latency-svc-q5fm6
Aug 11 09:18:05.187: INFO: Got endpoints: latency-svc-q5fm6 [5.986781645s]
Aug 11 09:18:05.713: INFO: Created: latency-svc-r7lw8
Aug 11 09:18:06.564: INFO: Got endpoints: latency-svc-r7lw8 [7.201197076s]
Aug 11 09:18:06.567: INFO: Created: latency-svc-f5zv4
Aug 11 09:18:06.616: INFO: Got endpoints: latency-svc-f5zv4 [6.075817686s]
Aug 11 09:18:06.888: INFO: Created: latency-svc-h25bk
Aug 11 09:18:06.938: INFO: Got endpoints: latency-svc-h25bk [6.111221644s]
Aug 11 09:18:07.146: INFO: Created: latency-svc-t4c2h
Aug 11 09:18:07.229: INFO: Got endpoints: latency-svc-t4c2h [5.881260193s]
Aug 11 09:18:07.985: INFO: Created: latency-svc-td2ww
Aug 11 09:18:08.035: INFO: Got endpoints: latency-svc-td2ww [6.684953655s]
Aug 11 09:18:08.414: INFO: Created: latency-svc-2pt9d
Aug 11 09:18:08.437: INFO: Got endpoints: latency-svc-2pt9d [6.444014401s]
Aug 11 09:18:08.785: INFO: Created: latency-svc-k8kjh
Aug 11 09:18:09.001: INFO: Got endpoints: latency-svc-k8kjh [6.533525383s]
Aug 11 09:18:09.058: INFO: Created: latency-svc-bzshc
Aug 11 09:18:09.216: INFO: Got endpoints: latency-svc-bzshc [6.677762753s]
Aug 11 09:18:10.082: INFO: Created: latency-svc-2btjv
Aug 11 09:18:10.085: INFO: Got endpoints: latency-svc-2btjv [6.721315628s]
Aug 11 09:18:10.510: INFO: Created: latency-svc-6mvg7
Aug 11 09:18:10.513: INFO: Got endpoints: latency-svc-6mvg7 [6.919479542s]
Aug 11 09:18:11.346: INFO: Created: latency-svc-7bmtv
Aug 11 09:18:11.660: INFO: Got endpoints: latency-svc-7bmtv [7.996628227s]
Aug 11 09:18:11.663: INFO: Created: latency-svc-n8qj4
Aug 11 09:18:11.866: INFO: Got endpoints: latency-svc-n8qj4 [8.18155525s]
Aug 11 09:18:12.504: INFO: Created: latency-svc-bxgb6
Aug 11 09:18:12.508: INFO: Got endpoints: latency-svc-bxgb6 [8.637418189s]
Aug 11 09:18:12.959: INFO: Created: latency-svc-5xmbl
Aug 11 09:18:12.973: INFO: Got endpoints: latency-svc-5xmbl [8.098078885s]
Aug 11 09:18:13.330: INFO: Created: latency-svc-nm7nd
Aug 11 09:18:13.332: INFO: Got endpoints: latency-svc-nm7nd [8.145476934s]
Aug 11 09:18:13.407: INFO: Created: latency-svc-h5lws
Aug 11 09:18:14.010: INFO: Got endpoints: latency-svc-h5lws [7.446183725s]
Aug 11 09:18:14.289: INFO: Created: latency-svc-nzfdm
Aug 11 09:18:14.314: INFO: Got endpoints: latency-svc-nzfdm [7.698605558s]
Aug 11 09:18:14.531: INFO: Created: latency-svc-7kxt2
Aug 11 09:18:14.623: INFO: Got endpoints: latency-svc-7kxt2 [7.684871139s]
Aug 11 09:18:14.668: INFO: Created: latency-svc-h2dhp
Aug 11 09:18:14.721: INFO: Got endpoints: latency-svc-h2dhp [7.491545982s]
Aug 11 09:18:14.820: INFO: Created: latency-svc-bl2xm
Aug 11 09:18:14.860: INFO: Got endpoints: latency-svc-bl2xm [6.824867753s]
Aug 11 09:18:14.935: INFO: Created: latency-svc-z5s6n
Aug 11 09:18:14.937: INFO: Got endpoints: latency-svc-z5s6n [6.500182343s]
Aug 11 09:18:15.091: INFO: Created: latency-svc-q5swj
Aug 11 09:18:15.096: INFO: Got endpoints: latency-svc-q5swj [6.094031122s]
Aug 11 09:18:15.188: INFO: Created: latency-svc-7tqvf
Aug 11 09:18:15.264: INFO: Got endpoints: latency-svc-7tqvf [6.048264876s]
Aug 11 09:18:15.297: INFO: Created: latency-svc-mv5hx
Aug 11 09:18:15.303: INFO: Got endpoints: latency-svc-mv5hx [5.21847399s]
Aug 11 09:18:15.421: INFO: Created: latency-svc-pcrmk
Aug 11 09:18:15.424: INFO: Got endpoints: latency-svc-pcrmk [4.910286088s]
Aug 11 09:18:15.589: INFO: Created: latency-svc-hvn2h
Aug 11 09:18:15.615: INFO: Got endpoints: latency-svc-hvn2h [3.955552706s]
Aug 11 09:18:15.780: INFO: Created: latency-svc-f5mwn
Aug 11 09:18:15.808: INFO: Got endpoints: latency-svc-f5mwn [3.942531944s]
Aug 11 09:18:15.996: INFO: Created: latency-svc-cmjr5
Aug 11 09:18:16.001: INFO: Got endpoints: latency-svc-cmjr5 [385.812242ms]
Aug 11 09:18:16.217: INFO: Created: latency-svc-n4r64
Aug 11 09:18:16.251: INFO: Got endpoints: latency-svc-n4r64 [3.742846459s]
Aug 11 09:18:16.451: INFO: Created: latency-svc-5mnpg
Aug 11 09:18:16.522: INFO: Got endpoints: latency-svc-5mnpg [3.548622611s]
Aug 11 09:18:16.643: INFO: Created: latency-svc-9rwls
Aug 11 09:18:16.646: INFO: Got endpoints: latency-svc-9rwls [3.313570392s]
Aug 11 09:18:16.822: INFO: Created: latency-svc-zdp5h
Aug 11 09:18:16.825: INFO: Got endpoints: latency-svc-zdp5h [2.814914363s]
Aug 11 09:18:18.088: INFO: Created: latency-svc-b8bkk
Aug 11 09:18:18.373: INFO: Got endpoints: latency-svc-b8bkk [4.059030608s]
Aug 11 09:18:18.685: INFO: Created: latency-svc-xch8b
Aug 11 09:18:19.026: INFO: Got endpoints: latency-svc-xch8b [4.402527209s]
Aug 11 09:18:19.146: INFO: Created: latency-svc-sgtt9
Aug 11 09:18:19.390: INFO: Got endpoints: latency-svc-sgtt9 [4.669478897s]
Aug 11 09:18:19.398: INFO: Created: latency-svc-mr257
Aug 11 09:18:19.600: INFO: Got endpoints: latency-svc-mr257 [4.740121811s]
Aug 11 09:18:19.990: INFO: Created: latency-svc-w4gmx
Aug 11 09:18:20.617: INFO: Got endpoints: latency-svc-w4gmx [5.679422028s]
Aug 11 09:18:20.618: INFO: Created: latency-svc-fdbr5
Aug 11 09:18:20.726: INFO: Got endpoints: latency-svc-fdbr5 [5.630905433s]
Aug 11 09:18:20.813: INFO: Created: latency-svc-dsj7c
Aug 11 09:18:21.062: INFO: Got endpoints: latency-svc-dsj7c [5.797846545s]
Aug 11 09:18:21.593: INFO: Created: latency-svc-st8hm
Aug 11 09:18:21.873: INFO: Got endpoints: latency-svc-st8hm [6.569672066s]
Aug 11 09:18:21.873: INFO: Created: latency-svc-h68dj
Aug 11 09:18:22.137: INFO: Got endpoints: latency-svc-h68dj [6.713497589s]
Aug 11 09:18:22.215: INFO: Created: latency-svc-5qngc
Aug 11 09:18:22.295: INFO: Got endpoints: latency-svc-5qngc [6.486876827s]
Aug 11 09:18:22.317: INFO: Created: latency-svc-kjlkk
Aug 11 09:18:22.364: INFO: Got endpoints: latency-svc-kjlkk [6.362498136s]
Aug 11 09:18:22.449: INFO: Created: latency-svc-9kvms
Aug 11 09:18:22.449: INFO: Got endpoints: latency-svc-9kvms [6.19851115s]
Aug 11 09:18:22.535: INFO: Created: latency-svc-8rnnd
Aug 11 09:18:22.582: INFO: Got endpoints: latency-svc-8rnnd [6.059902302s]
Aug 11 09:18:22.739: INFO: Created: latency-svc-cdkvw
Aug 11 09:18:22.744: INFO: Got endpoints: latency-svc-cdkvw [6.098088792s]
Aug 11 09:18:22.882: INFO: Created: latency-svc-8zc4w
Aug 11 09:18:22.933: INFO: Got endpoints: latency-svc-8zc4w [6.107712046s]
Aug 11 09:18:22.935: INFO: Created: latency-svc-rtpsk
Aug 11 09:18:22.958: INFO: Got endpoints: latency-svc-rtpsk [4.584758832s]
Aug 11 09:18:23.043: INFO: Created: latency-svc-8ppq5
Aug 11 09:18:23.048: INFO: Got endpoints: latency-svc-8ppq5 [4.02255718s]
Aug 11 09:18:23.111: INFO: Created: latency-svc-lb26w
Aug 11 09:18:23.133: INFO: Got endpoints: latency-svc-lb26w [3.74214965s]
Aug 11 09:18:23.193: INFO: Created: latency-svc-s5v55
Aug 11 09:18:23.196: INFO: Got endpoints: latency-svc-s5v55 [3.595678447s]
Aug 11 09:18:23.244: INFO: Created: latency-svc-2rl72
Aug 11 09:18:23.289: INFO: Got endpoints: latency-svc-2rl72 [2.671893664s]
Aug 11 09:18:23.401: INFO: Created: latency-svc-dx42s
Aug 11 09:18:23.410: INFO: Got endpoints: latency-svc-dx42s [2.683016784s]
Aug 11 09:18:23.453: INFO: Created: latency-svc-f525n
Aug 11 09:18:23.504: INFO: Got endpoints: latency-svc-f525n [2.44181087s]
Aug 11 09:18:23.540: INFO: Created: latency-svc-7kr86
Aug 11 09:18:23.557: INFO: Got endpoints: latency-svc-7kr86 [1.684003646s]
Aug 11 09:18:23.646: INFO: Created: latency-svc-nf76k
Aug 11 09:18:23.690: INFO: Got endpoints: latency-svc-nf76k [1.552580857s]
Aug 11 09:18:23.727: INFO: Created: latency-svc-ppw75
Aug 11 09:18:23.786: INFO: Got endpoints: latency-svc-ppw75 [1.49050349s]
Aug 11 09:18:23.788: INFO: Created: latency-svc-j458t
Aug 11 09:18:23.797: INFO: Got endpoints: latency-svc-j458t [1.433132899s]
Aug 11 09:18:23.954: INFO: Created: latency-svc-qvdtg
Aug 11 09:18:23.957: INFO: Got endpoints: latency-svc-qvdtg [1.507673792s]
Aug 11 09:18:24.109: INFO: Created: latency-svc-vbpdc
Aug 11 09:18:24.111: INFO: Got endpoints: latency-svc-vbpdc [1.529575456s]
Aug 11 09:18:24.162: INFO: Created: latency-svc-84drd
Aug 11 09:18:24.176: INFO: Got endpoints: latency-svc-84drd [1.431831015s]
Aug 11 09:18:24.206: INFO: Created: latency-svc-m45xh
Aug 11 09:18:24.282: INFO: Got endpoints: latency-svc-m45xh [1.349392825s]
Aug 11 09:18:24.348: INFO: Created: latency-svc-rz2td
Aug 11 09:18:24.362: INFO: Got endpoints: latency-svc-rz2td [1.403944571s]
Aug 11 09:18:24.474: INFO: Created: latency-svc-6k9gz
Aug 11 09:18:24.478: INFO: Got endpoints: latency-svc-6k9gz [1.429281473s]
Aug 11 09:18:24.573: INFO: Created: latency-svc-dpdzz
Aug 11 09:18:24.648: INFO: Got endpoints: latency-svc-dpdzz [1.514721349s]
Aug 11 09:18:24.677: INFO: Created: latency-svc-vs8th
Aug 11 09:18:24.693: INFO: Got endpoints: latency-svc-vs8th [1.496898464s]
Aug 11 09:18:25.432: INFO: Created: latency-svc-8cptk
Aug 11 09:18:25.438: INFO: Got endpoints: latency-svc-8cptk [2.148729042s]
Aug 11 09:18:25.710: INFO: Created: latency-svc-f97d7
Aug 11 09:18:25.747: INFO: Got endpoints: latency-svc-f97d7 [2.337809401s]
Aug 11 09:18:25.905: INFO: Created: latency-svc-9pm2m
Aug 11 09:18:25.908: INFO: Got endpoints: latency-svc-9pm2m [2.403213584s]
Aug 11 09:18:26.870: INFO: Created: latency-svc-rnjc9
Aug 11 09:18:26.880: INFO: Got endpoints: latency-svc-rnjc9 [3.323041269s]
Aug 11 09:18:27.362: INFO: Created: latency-svc-rkrsr
Aug 11 09:18:27.407: INFO: Got endpoints: latency-svc-rkrsr [3.716831627s]
Aug 11 09:18:27.522: INFO: Created: latency-svc-b9rb6
Aug 11 09:18:27.976: INFO: Got endpoints: latency-svc-b9rb6 [4.190156645s]
Aug 11 09:18:28.346: INFO: Created: latency-svc-z9qnx
Aug 11 09:18:28.395: INFO: Got endpoints: latency-svc-z9qnx [4.598163233s]
Aug 11 09:18:29.094: INFO: Created: latency-svc-57t7j
Aug 11 09:18:29.265: INFO: Got endpoints: latency-svc-57t7j [5.307633656s]
Aug 11 09:18:30.059: INFO: Created: latency-svc-f5tp8
Aug 11 09:18:30.307: INFO: Got endpoints: latency-svc-f5tp8 [6.195838314s]
Aug 11 09:18:30.318: INFO: Created: latency-svc-g8jxr
Aug 11 09:18:30.540: INFO: Got endpoints: latency-svc-g8jxr [6.363762515s]
Aug 11 09:18:30.560: INFO: Created: latency-svc-7w9qc
Aug 11 09:18:30.607: INFO: Got endpoints: latency-svc-7w9qc [6.324286626s]
Aug 11 09:18:30.828: INFO: Created: latency-svc-qw4rw
Aug 11 09:18:31.271: INFO: Got endpoints: latency-svc-qw4rw [6.908498824s]
Aug 11 09:18:32.242: INFO: Created: latency-svc-m59cp
Aug 11 09:18:32.260: INFO: Got endpoints: latency-svc-m59cp [7.782435313s]
Aug 11 09:18:35.179: INFO: Created: latency-svc-j5qg9
Aug 11 09:18:35.282: INFO: Got endpoints: latency-svc-j5qg9 [10.634831225s]
Aug 11 09:18:36.725: INFO: Created: latency-svc-mmfrr
Aug 11 09:18:36.757: INFO: Got endpoints: latency-svc-mmfrr [12.064639047s]
Aug 11 09:18:37.481: INFO: Created: latency-svc-x2gfz
Aug 11 09:18:37.486: INFO: Got endpoints: latency-svc-x2gfz [12.048735179s]
Aug 11 09:18:37.745: INFO: Created: latency-svc-wd67g
Aug 11 09:18:37.801: INFO: Got endpoints: latency-svc-wd67g [12.053057527s]
Aug 11 09:18:37.996: INFO: Created: latency-svc-d28zq
Aug 11 09:18:38.008: INFO: Got endpoints: latency-svc-d28zq [12.100877482s]
Aug 11 09:18:38.089: INFO: Created: latency-svc-9gl8g
Aug 11 09:18:38.247: INFO: Got endpoints: latency-svc-9gl8g [11.366817372s]
Aug 11 09:18:38.939: INFO: Created: latency-svc-llr8h
Aug 11 09:18:39.206: INFO: Got endpoints: latency-svc-llr8h [11.799668861s]
Aug 11 09:18:39.597: INFO: Created: latency-svc-crdk7
Aug 11 09:18:39.659: INFO: Got endpoints: latency-svc-crdk7 [11.682612245s]
Aug 11 09:18:40.863: INFO: Created: latency-svc-k6n5q
Aug 11 09:18:41.284: INFO: Got endpoints: latency-svc-k6n5q [12.888647948s]
Aug 11 09:18:43.259: INFO: Created: latency-svc-zxzwt
Aug 11 09:18:43.295: INFO: Got endpoints: latency-svc-zxzwt [14.029713426s]
Aug 11 09:18:43.982: INFO: Created: latency-svc-v7vsk
Aug 11 09:18:43.982: INFO: Got endpoints: latency-svc-v7vsk [13.674548104s]
Aug 11 09:18:44.853: INFO: Created: latency-svc-96cpq
Aug 11 09:18:44.898: INFO: Got endpoints: latency-svc-96cpq [14.358669384s]
Aug 11 09:18:45.122: INFO: Created: latency-svc-58hn5
Aug 11 09:18:46.735: INFO: Got endpoints: latency-svc-58hn5 [16.128502396s]
Aug 11 09:18:46.738: INFO: Created: latency-svc-wq84f
Aug 11 09:18:46.774: INFO: Got endpoints: latency-svc-wq84f [15.503070381s]
Aug 11 09:18:48.219: INFO: Created: latency-svc-9whqh
Aug 11 09:18:48.589: INFO: Got endpoints: latency-svc-9whqh [16.328856255s]
Aug 11 09:18:49.033: INFO: Created: latency-svc-r7khc
Aug 11 09:18:49.882: INFO: Got endpoints: latency-svc-r7khc [14.599819556s]
Aug 11 09:18:50.305: INFO: Created: latency-svc-t8ckl
Aug 11 09:18:50.415: INFO: Got endpoints: latency-svc-t8ckl [13.657879385s]
Aug 11 09:18:51.405: INFO: Created: latency-svc-nm8wp
Aug 11 09:18:51.738: INFO: Got endpoints: latency-svc-nm8wp [14.25156296s]
Aug 11 09:18:51.763: INFO: Created: latency-svc-ncr8w
Aug 11 09:18:51.819: INFO: Got endpoints: latency-svc-ncr8w [14.018520241s]
Aug 11 09:18:52.675: INFO: Created: latency-svc-f4xgn
Aug 11 09:18:52.700: INFO: Got endpoints: latency-svc-f4xgn [14.691559229s]
Aug 11 09:18:52.943: INFO: Created: latency-svc-j8bvf
Aug 11 09:18:52.946: INFO: Got endpoints: latency-svc-j8bvf [14.698904195s]
Aug 11 09:18:53.004: INFO: Created: latency-svc-5qc8l
Aug 11 09:18:53.385: INFO: Got endpoints: latency-svc-5qc8l [14.178518916s]
Aug 11 09:18:54.143: INFO: Created: latency-svc-lwbpq
Aug 11 09:18:54.609: INFO: Got endpoints: latency-svc-lwbpq [14.9505532s]
Aug 11 09:18:54.610: INFO: Created: latency-svc-4v96z
Aug 11 09:18:54.660: INFO: Got endpoints: latency-svc-4v96z [13.375588292s]
Aug 11 09:18:54.918: INFO: Created: latency-svc-7qgfp
Aug 11 09:18:54.920: INFO: Got endpoints: latency-svc-7qgfp [11.625626739s]
Aug 11 09:18:55.247: INFO: Created: latency-svc-t7g5v
Aug 11 09:18:55.283: INFO: Got endpoints: latency-svc-t7g5v [11.300620944s]
Aug 11 09:18:56.012: INFO: Created: latency-svc-4nr8h
Aug 11 09:18:56.170: INFO: Got endpoints: latency-svc-4nr8h [11.271375579s]
Aug 11 09:18:57.218: INFO: Created: latency-svc-tsfw2
Aug 11 09:18:57.247: INFO: Got endpoints: latency-svc-tsfw2 [10.512017062s]
Aug 11 09:18:58.443: INFO: Created: latency-svc-whfvv
Aug 11 09:18:58.483: INFO: Got endpoints: latency-svc-whfvv [11.708863016s]
Aug 11 09:18:59.361: INFO: Created: latency-svc-t8428
Aug 11 09:18:59.835: INFO: Got endpoints: latency-svc-t8428 [11.245704049s]
Aug 11 09:19:00.974: INFO: Created: latency-svc-bmtvh
Aug 11 09:19:00.985: INFO: Got endpoints: latency-svc-bmtvh [11.102633715s]
Aug 11 09:19:02.503: INFO: Created: latency-svc-kxhws
Aug 11 09:19:02.685: INFO: Got endpoints: latency-svc-kxhws [12.269186226s]
Aug 11 09:19:04.468: INFO: Created: latency-svc-f86nb
Aug 11 09:19:05.178: INFO: Got endpoints: latency-svc-f86nb [13.439942522s]
Aug 11 09:19:05.454: INFO: Created: latency-svc-kcxnc
Aug 11 09:19:05.478: INFO: Got endpoints: latency-svc-kcxnc [13.658501695s]
Aug 11 09:19:05.852: INFO: Created: latency-svc-7fbs5
Aug 11 09:19:06.367: INFO: Got endpoints: latency-svc-7fbs5 [13.667371166s]
Aug 11 09:19:08.230: INFO: Created: latency-svc-s4gpb
Aug 11 09:19:08.733: INFO: Got endpoints: latency-svc-s4gpb [15.786412826s]
Aug 11 09:19:08.758: INFO: Created: latency-svc-kc8pq
Aug 11 09:19:09.129: INFO: Got endpoints: latency-svc-kc8pq [15.743471356s]
Aug 11 09:19:09.702: INFO: Created: latency-svc-94245
Aug 11 09:19:09.707: INFO: Got endpoints: latency-svc-94245 [15.097550584s]
Aug 11 09:19:10.061: INFO: Created: latency-svc-ks7tq
Aug 11 09:19:10.126: INFO: Got endpoints: latency-svc-ks7tq [15.465915688s]
Aug 11 09:19:10.254: INFO: Created: latency-svc-z2kwg
Aug 11 09:19:10.289: INFO: Got endpoints: latency-svc-z2kwg [15.368313702s]
Aug 11 09:19:10.352: INFO: Created: latency-svc-ksgvh
Aug 11 09:19:10.471: INFO: Got endpoints: latency-svc-ksgvh [15.188108786s]
Aug 11 09:19:10.490: INFO: Created: latency-svc-k8npg
Aug 11 09:19:10.510: INFO: Got endpoints: latency-svc-k8npg [14.340142449s]
Aug 11 09:19:11.068: INFO: Created: latency-svc-tgpps
Aug 11 09:19:11.331: INFO: Got endpoints: latency-svc-tgpps [14.083776965s]
Aug 11 09:19:11.334: INFO: Created: latency-svc-scp56
Aug 11 09:19:11.373: INFO: Got endpoints: latency-svc-scp56 [12.890110621s]
Aug 11 09:19:11.418: INFO: Created: latency-svc-d6gfw
Aug 11 09:19:11.499: INFO: Got endpoints: latency-svc-d6gfw [11.6637131s]
Aug 11 09:19:11.500: INFO: Created: latency-svc-2h7g7
Aug 11 09:19:11.507: INFO: Got endpoints: latency-svc-2h7g7 [10.521781457s]
Aug 11 09:19:11.558: INFO: Created: latency-svc-c4m6w
Aug 11 09:19:11.690: INFO: Got endpoints: latency-svc-c4m6w [9.005590557s]
Aug 11 09:19:11.690: INFO: Latencies: [385.812242ms 1.02135937s 1.349392825s 1.403944571s 1.429281473s 1.431831015s 1.433132899s 1.49050349s 1.496898464s 1.507673792s 1.514721349s 1.529575456s 1.552580857s 1.684003646s 1.719140624s 1.812857473s 1.904417311s 1.906878148s 1.95081542s 1.994440832s 2.148729042s 2.242771229s 2.257704584s 2.323374869s 2.337809401s 2.403213584s 2.410254791s 2.44181087s 2.459635875s 2.516145318s 2.671893664s 2.683016784s 2.814914363s 2.9468334s 3.007060392s 3.075550515s 3.100400443s 3.226963958s 3.313570392s 3.314343764s 3.322869555s 3.323041269s 3.339171472s 3.490585437s 3.50570982s 3.516284915s 3.548622611s 3.55684654s 3.595678447s 3.65099186s 3.691443447s 3.716831627s 3.74214965s 3.742846459s 3.760259139s 3.886251985s 3.904159048s 3.916863441s 3.942531944s 3.955552706s 3.983434904s 4.02255718s 4.023875908s 4.059030608s 4.102320911s 4.130576674s 4.144825971s 4.181449453s 4.190156645s 4.291280005s 4.376966736s 4.402527209s 4.576229234s 4.584758832s 4.598163233s 4.669478897s 4.713896719s 4.740121811s 4.910286088s 4.917360343s 4.946070148s 5.033224045s 5.074223646s 5.143794676s 5.21847399s 5.307633656s 5.555580755s 5.630905433s 5.679422028s 5.797846545s 5.854663891s 5.881260193s 5.986781645s 6.048264876s 6.059902302s 6.075817686s 6.094031122s 6.098088792s 6.107712046s 6.111221644s 6.195838314s 6.19851115s 6.324286626s 6.362498136s 6.363762515s 6.444014401s 6.486876827s 6.500182343s 6.533525383s 6.569672066s 6.677762753s 6.684953655s 6.713497589s 6.721315628s 6.824867753s 6.908498824s 6.919479542s 7.201197076s 7.446183725s 7.491545982s 7.610568729s 7.632587497s 7.684871139s 7.698605558s 7.782435313s 7.996628227s 8.098078885s 8.145476934s 8.18155525s 8.637418189s 9.005590557s 9.574668102s 9.576651188s 9.591448393s 10.245469735s 10.512017062s 10.521781457s 10.634831225s 11.102633715s 11.245704049s 11.271375579s 11.300620944s 11.366817372s 11.367816908s 11.625626739s 11.6637131s 11.682612245s 11.708863016s 11.799668861s 12.048735179s 12.053057527s 12.064639047s 12.100877482s 12.269186226s 12.321552925s 12.888647948s 12.890110621s 12.979038234s 13.375588292s 13.439942522s 13.657879385s 13.658501695s 13.667371166s 13.674548104s 14.018520241s 14.029713426s 14.083776965s 14.154311954s 14.178518916s 14.25156296s 14.340142449s 14.358669384s 14.599819556s 14.691559229s 14.698904195s 14.9505532s 15.097550584s 15.113256668s 15.188108786s 15.368313702s 15.465915688s 15.503070381s 15.743471356s 15.786412826s 16.128502396s 16.245586846s 16.328856255s 16.660696071s 17.483119472s 18.031639922s 20.171723075s 20.252608411s 20.315431636s 20.462184243s 22.263646849s 23.463251152s 24.334467845s 25.21456258s 25.29190431s 25.664674725s]
Aug 11 09:19:11.691: INFO: 50 %ile: 6.195838314s
Aug 11 09:19:11.691: INFO: 90 %ile: 15.465915688s
Aug 11 09:19:11.691: INFO: 99 %ile: 25.29190431s
Aug 11 09:19:11.691: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:19:11.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-200" for this suite.
Aug 11 09:20:51.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:20:51.786: INFO: namespace svc-latency-200 deletion completed in 1m40.093298581s

• [SLOW TEST:218.810 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:20:51.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:20:52.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22" in namespace "downward-api-8311" to be "success or failure"
Aug 11 09:20:52.172: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Pending", Reason="", readiness=false. Elapsed: 51.388847ms
Aug 11 09:20:54.370: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249976664s
Aug 11 09:20:57.022: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.901645959s
Aug 11 09:20:59.201: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Pending", Reason="", readiness=false. Elapsed: 7.080666039s
Aug 11 09:21:01.229: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Pending", Reason="", readiness=false. Elapsed: 9.108577367s
Aug 11 09:21:03.351: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Running", Reason="", readiness=true. Elapsed: 11.230986856s
Aug 11 09:21:05.441: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.32093884s
STEP: Saw pod success
Aug 11 09:21:05.441: INFO: Pod "downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22" satisfied condition "success or failure"
Aug 11 09:21:05.444: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22 container client-container: 
STEP: delete the pod
Aug 11 09:21:07.165: INFO: Waiting for pod downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22 to disappear
Aug 11 09:21:07.681: INFO: Pod downwardapi-volume-f4d60c46-d641-4e59-8976-1cfc3e708f22 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:21:07.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8311" for this suite.
Aug 11 09:21:18.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:21:18.430: INFO: namespace downward-api-8311 deletion completed in 10.745641396s

• [SLOW TEST:26.644 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
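The DefaultMode spec above builds a pod whose downward API volume carries an explicit defaultMode and then checks the projected file's permissions from inside the container. A minimal client-go-style sketch of such a pod spec; the pod name, the busybox image, and the 0400 mode are illustrative assumptions, not values taken from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds a pod whose downward API volume applies an
// explicit file mode (0400 here) to every projected file.
func downwardAPIPod() *corev1.Pod {
	defaultMode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &defaultMode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	pod := downwardAPIPod()
	fmt.Printf("%s mounts files with mode %#o\n", pod.Name, *pod.Spec.Volumes[0].DownwardAPI.DefaultMode)
}
------------------------------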
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:21:18.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:21:19.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642" in namespace "projected-6496" to be "success or failure"
Aug 11 09:21:19.789: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 253.691204ms
Aug 11 09:21:22.441: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.906135681s
Aug 11 09:21:24.580: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 5.045093579s
Aug 11 09:21:26.875: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 7.339157802s
Aug 11 09:21:29.058: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 9.523050092s
Aug 11 09:21:31.064: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 11.528656598s
Aug 11 09:21:33.303: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Pending", Reason="", readiness=false. Elapsed: 13.76795492s
Aug 11 09:21:35.306: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Running", Reason="", readiness=true. Elapsed: 15.770811529s
Aug 11 09:21:37.676: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.140384976s
STEP: Saw pod success
Aug 11 09:21:37.676: INFO: Pod "downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642" satisfied condition "success or failure"
Aug 11 09:21:37.678: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642 container client-container: 
STEP: delete the pod
Aug 11 09:21:38.697: INFO: Waiting for pod downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642 to disappear
Aug 11 09:21:39.263: INFO: Pod downwardapi-volume-f4e7b34d-1b8b-4b02-bcbe-6d3c83886642 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:21:39.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6496" for this suite.
Aug 11 09:21:49.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:21:49.702: INFO: namespace projected-6496 deletion completed in 10.433884483s

• [SLOW TEST:31.271 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
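The memory-limit variant above projects the container's own limits.memory into a file through a projected volume's resourceFieldRef. A hedged sketch under the same assumptions as the previous example (illustrative names; the 64Mi limit is chosen for the sketch, not read from this run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedMemoryLimitPod exposes the container's own memory limit to
// the container through a projected downward API volume.
func projectedMemoryLimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(projectedMemoryLimitPod().Name)
}
------------------------------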
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:21:49.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug 11 09:21:51.038: INFO: created pod pod-service-account-defaultsa
Aug 11 09:21:51.038: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 11 09:21:51.047: INFO: created pod pod-service-account-mountsa
Aug 11 09:21:51.047: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 11 09:21:51.193: INFO: created pod pod-service-account-nomountsa
Aug 11 09:21:51.193: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 11 09:21:51.251: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 11 09:21:51.251: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 11 09:21:51.425: INFO: created pod pod-service-account-mountsa-mountspec
Aug 11 09:21:51.425: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 11 09:21:51.502: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 11 09:21:51.502: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 11 09:21:52.037: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 11 09:21:52.037: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 11 09:21:52.127: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 11 09:21:52.127: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 11 09:21:52.603: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 11 09:21:52.603: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:21:52.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2082" for this suite.
Aug 11 09:22:34.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:22:34.633: INFO: namespace svcaccounts-2082 deletion completed in 41.472011569s

• [SLOW TEST:44.932 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
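Token automount in the spec above is resolved from two knobs: automountServiceAccountToken on the ServiceAccount and on the pod spec, with the pod-level setting taking precedence. That precedence is visible in the log: pod-service-account-defaultsa-nomountspec reports mount: false even under the default SA, while pod-service-account-nomountsa-mountspec reports mount: true despite its no-mount SA. A sketch of how such pods differ; the service-account name nomount-sa is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// tokenPod binds a pod to a service account and optionally overrides
// token automount at the pod level; a nil mount defers to the
// ServiceAccount's automountServiceAccountToken (default true).
func tokenPod(name, sa string, mount *bool) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			ServiceAccountName:           sa,
			AutomountServiceAccountToken: mount, // pod-level setting wins
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "busybox",
			}},
		},
	}
}

func main() {
	// Three of the nine combinations exercised above.
	fmt.Println(tokenPod("pod-service-account-defaultsa", "default", nil).Name)
	fmt.Println(tokenPod("pod-service-account-nomountsa-mountspec", "nomount-sa", boolPtr(true)).Name)
	fmt.Println(tokenPod("pod-service-account-defaultsa-nomountspec", "default", boolPtr(false)).Name)
}
------------------------------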
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:22:34.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:22:34.958: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a" in namespace "projected-5595" to be "success or failure"
Aug 11 09:22:35.043: INFO: Pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a": Phase="Pending", Reason="", readiness=false. Elapsed: 84.891603ms
Aug 11 09:22:37.126: INFO: Pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168614765s
Aug 11 09:22:39.130: INFO: Pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171812044s
Aug 11 09:22:41.133: INFO: Pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a": Phase="Running", Reason="", readiness=true. Elapsed: 6.17480192s
Aug 11 09:22:43.136: INFO: Pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.17856448s
STEP: Saw pod success
Aug 11 09:22:43.136: INFO: Pod "downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a" satisfied condition "success or failure"
Aug 11 09:22:43.139: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a container client-container: 
STEP: delete the pod
Aug 11 09:22:43.307: INFO: Waiting for pod downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a to disappear
Aug 11 09:22:43.342: INFO: Pod downwardapi-volume-ba899e34-3c45-472c-a54c-6446584e527a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:22:43.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5595" for this suite.
Aug 11 09:22:51.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:22:51.545: INFO: namespace projected-5595 deletion completed in 8.198941073s

• [SLOW TEST:16.912 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
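The podname variant above differs from the memory-limit sketch only in the projected item, which uses a fieldRef to metadata.name instead of a resourceFieldRef. A minimal fragment showing just that swap:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The rest of the pod matches the memory-limit sketch above.
	item := corev1.DownwardAPIVolumeFile{
		Path:     "podname",
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
	}
	fmt.Printf("project %s from %s\n", item.Path, item.FieldRef.FieldPath)
}
------------------------------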
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:22:51.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 09:22:52.175: INFO: Creating deployment "nginx-deployment"
Aug 11 09:22:52.180: INFO: Waiting for observed generation 1
Aug 11 09:22:54.548: INFO: Waiting for all required pods to come up
Aug 11 09:22:54.553: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 11 09:23:14.713: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 11 09:23:14.719: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 11 09:23:14.725: INFO: Updating deployment nginx-deployment
Aug 11 09:23:14.725: INFO: Waiting for observed generation 2
Aug 11 09:23:17.096: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 11 09:23:17.102: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 11 09:23:17.383: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 11 09:23:17.470: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 11 09:23:17.470: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 11 09:23:17.472: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 11 09:23:17.476: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 11 09:23:17.476: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 11 09:23:17.482: INFO: Updating deployment nginx-deployment
Aug 11 09:23:17.482: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 11 09:23:18.255: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 11 09:23:18.527: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
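The two .spec.replicas checks above are the proportional-scaling arithmetic itself. The pod budget is spec.replicas plus maxSurge (the deployment.kubernetes.io/max-replicas annotations in the dumps below read 33 = 30 + 3), and on resize each replicaset keeps its fraction of the old budget of 13 (10 + 3): the old set goes 8 -> round(8 * 33 / 13) = 20 and the new set 5 -> round(5 * 33 / 13) = 13. A simplified sketch of that rounding step; the real controller persists budgets in annotations and hands out rounding leftovers explicitly:

package main

import (
	"fmt"
	"math"
)

// proportionalSize is the rounding step of deployment proportional
// scaling: a replicaset keeps its fraction of the allowed pod budget
// (spec.replicas + maxSurge). This is only a sketch of the arithmetic,
// not the controller's implementation.
func proportionalSize(rsReplicas, newBudget, oldBudget int) int {
	return int(math.Round(float64(rsReplicas) * float64(newBudget) / float64(oldBudget)))
}

func main() {
	oldBudget := 10 + 3 // 10 replicas + maxSurge 3
	newBudget := 30 + 3 // after scaling the deployment to 30

	fmt.Println(proportionalSize(8, newBudget, oldBudget)) // old RS: 8 -> 20
	fmt.Println(proportionalSize(5, newBudget, oldBudget)) // new RS: 5 -> 13
}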
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 11 09:23:20.885: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9793,SelfLink:/apis/apps/v1/namespaces/deployment-9793/deployments/nginx-deployment,UID:7c65ca3c-7d77-4010-81b3-22d5d20a8b2a,ResourceVersion:4165357,Generation:3,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-11 09:23:16 +0000 UTC 2020-08-11 09:22:52 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-08-11 09:23:18 +0000 UTC 2020-08-11 09:23:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Aug 11 09:23:21.559: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9793,SelfLink:/apis/apps/v1/namespaces/deployment-9793/replicasets/nginx-deployment-55fb7cb77f,UID:a3a06b36-96b0-47a8-842e-4d4201343a8f,ResourceVersion:4165398,Generation:3,CreationTimestamp:2020-08-11 09:23:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7c65ca3c-7d77-4010-81b3-22d5d20a8b2a 0xc002fae737 0xc002fae738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 11 09:23:21.559: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 11 09:23:21.559: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9793,SelfLink:/apis/apps/v1/namespaces/deployment-9793/replicasets/nginx-deployment-7b8c6f4498,UID:afd49c02-aea1-4711-87b9-55b62306bdb0,ResourceVersion:4165379,Generation:3,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7c65ca3c-7d77-4010-81b3-22d5d20a8b2a 0xc002fae807 0xc002fae808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug 11 09:23:22.279: INFO: Pod "nginx-deployment-55fb7cb77f-4glkm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4glkm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-4glkm,UID:9fa055fa-1925-4dba-be5f-fe9784fa91d2,ResourceVersion:4165331,Generation:0,CreationTimestamp:2020-08-11 09:23:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf187 0xc002faf188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-11 09:23:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.279: INFO: Pod "nginx-deployment-55fb7cb77f-7t6gp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7t6gp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-7t6gp,UID:c691fb46-9912-48ba-be2a-b9674231b51b,ResourceVersion:4165355,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf2f0 0xc002faf2f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.280: INFO: Pod "nginx-deployment-55fb7cb77f-9gpwm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9gpwm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-9gpwm,UID:945c1883-8883-4ecc-9eb9-c27b36bbc000,ResourceVersion:4165381,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf417 0xc002faf418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.280: INFO: Pod "nginx-deployment-55fb7cb77f-bkcdz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bkcdz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-bkcdz,UID:82f112f0-a994-46f6-8226-f8cfa8c04bd2,ResourceVersion:4165393,Generation:0,CreationTimestamp:2020-08-11 09:23:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf537 0xc002faf538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf5b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.280: INFO: Pod "nginx-deployment-55fb7cb77f-brphz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-brphz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-brphz,UID:ac128239-ba63-4bf2-ad0b-ff62cd3450c3,ResourceVersion:4165383,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf657 0xc002faf658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf6d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.280: INFO: Pod "nginx-deployment-55fb7cb77f-m8nc5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m8nc5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-m8nc5,UID:41c1b521-4cab-477a-89df-27325212163b,ResourceVersion:4165310,Generation:0,CreationTimestamp:2020-08-11 09:23:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf777 0xc002faf778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf7f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-11 09:23:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.280: INFO: Pod "nginx-deployment-55fb7cb77f-mm9vr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mm9vr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-mm9vr,UID:2ed6a296-b180-41a5-a388-af6b441f6e9b,ResourceVersion:4165363,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faf8e0 0xc002faf8e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002faf960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002faf980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.280: INFO: Pod "nginx-deployment-55fb7cb77f-qgf26" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qgf26,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-qgf26,UID:eb55dd06-7a09-49c2-9c3a-acaf43fec51b,ResourceVersion:4165304,Generation:0,CreationTimestamp:2020-08-11 09:23:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002fafa07 0xc002fafa08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fafa80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fafaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-11 09:23:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-55fb7cb77f-rc88m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rc88m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-rc88m,UID:afeb2247-b787-4b4d-8fa2-42470c4d9bf9,ResourceVersion:4165386,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002fafb70 0xc002fafb71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fafbf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fafc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-55fb7cb77f-rdgnm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rdgnm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-rdgnm,UID:29a5516f-7477-4833-b694-6869216a2eb8,ResourceVersion:4165333,Generation:0,CreationTimestamp:2020-08-11 09:23:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002fafca7 0xc002fafca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fafd20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fafd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-11 09:23:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-55fb7cb77f-sbdcd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sbdcd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-sbdcd,UID:2de40c04-5c57-478e-a8bc-ab13bb306d41,ResourceVersion:4165319,Generation:0,CreationTimestamp:2020-08-11 09:23:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002fafe10 0xc002fafe11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fafe90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fafed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-11 09:23:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-55fb7cb77f-vwwnx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vwwnx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-vwwnx,UID:5d97016e-1031-4974-9beb-1895dfac503e,ResourceVersion:4165404,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002faffa0 0xc002faffa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d941f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d94210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-11 09:23:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-55fb7cb77f-z8w2f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z8w2f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-55fb7cb77f-z8w2f,UID:44e2afec-f3ad-4a0c-ad0e-0b290ba10a69,ResourceVersion:4165384,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a3a06b36-96b0-47a8-842e-4d4201343a8f 0xc002d948a0 0xc002d948a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d94a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d94a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-7b8c6f4498-4pr6l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4pr6l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-4pr6l,UID:fa51e77a-bdc8-4971-9e06-8c5f7b2eaae8,ResourceVersion:4165243,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc002d94bf7 0xc002d94bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d94d70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d94e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.118,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://54092d29ee35d89f09acc1af2c9edd331eb28b318cffe0b7ff58a8bece20c9fb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.281: INFO: Pod "nginx-deployment-7b8c6f4498-6c5kp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6c5kp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-6c5kp,UID:bbcd8f42-a87e-48c9-b635-48bd17df58e6,ResourceVersion:4165366,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc002d950b7 0xc002d950b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d95260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d95280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-8rd8r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8rd8r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-8rd8r,UID:1b7e9128-bd73-49e5-9cf3-5b1b5a787da8,ResourceVersion:4165246,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc002d954c7 0xc002d954c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d95670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d95690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.190,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0e4ce4f96f48fffae63f6e91bac12822b05ef9a27bd2657f8ae04f0d771f8c21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-8tkkq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8tkkq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-8tkkq,UID:364867bc-db7f-4858-80b9-e80984a3eed0,ResourceVersion:4165247,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc002d95887 0xc002d95888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d959a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d95ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.117,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://de31a2e498e105c2ba18f639ad37e187f54828529e6027a9c9a47d0615be54aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-d4tz9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d4tz9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-d4tz9,UID:bab66772-62b7-4151-961a-290ce00e986e,ResourceVersion:4165263,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc002d95d87 0xc002d95d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d95e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d95e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.119,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://44d741bcc96b9724915f61ef299c838f37c2b486a81fb0cee95e71bedcc6d653}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-frhqq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-frhqq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-frhqq,UID:546c7098-682f-4ab6-a911-0a3b54394e58,ResourceVersion:4165380,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000402057 0xc000402058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004020d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0004020f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-gp8jm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gp8jm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-gp8jm,UID:26a1c2bf-0856-4d9e-a480-72e0c7e41f49,ResourceVersion:4165388,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000402177 0xc000402178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004021f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000402210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-gx766" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gx766,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-gx766,UID:bff0d5ca-902a-43c6-bd53-b5377aee0e74,ResourceVersion:4165258,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000402297 0xc000402298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000402310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000402330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.116,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1616b25bd965ec8b67774c13f322b697453845f914a393557395481d9078866f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.282: INFO: Pod "nginx-deployment-7b8c6f4498-kdmz5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kdmz5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-kdmz5,UID:14368e9b-9482-472c-86ec-d6f3358dcb9e,ResourceVersion:4165387,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000402497 0xc000402498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004025f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000402650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-mb8z7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mb8z7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-mb8z7,UID:1670675e-b785-4a6f-ba2d-6d4fc0e58c77,ResourceVersion:4165391,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000402907 0xc000402908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0004029e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000402a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-11 09:23:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-n2svf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n2svf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-n2svf,UID:2d68889c-e554-4e61-9171-63183d0201a9,ResourceVersion:4165257,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000402c47 0xc000402c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000403190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000403250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.192,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2d11c35591490517eb5ffe0dec9639da3be04be8b7666eaeb69e5e9e857721fa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-n7m4f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n7m4f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-n7m4f,UID:11b30a0f-a9dd-417b-9fea-8123dee75606,ResourceVersion:4165375,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc0004036a7 0xc0004036a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000403790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000403860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-njwxs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-njwxs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-njwxs,UID:02641eef-1e7f-47ac-bcdf-76914e35253c,ResourceVersion:4165364,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000403907 0xc000403908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000403980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000403a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-p2tsp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p2tsp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-p2tsp,UID:841d04fd-eb22-45e4-b6be-4bc0de452fe4,ResourceVersion:4165369,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000403b07 0xc000403b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000403b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000403ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-skh2d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-skh2d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-skh2d,UID:e28a7555-af9e-421e-b285-ba1df5855634,ResourceVersion:4165377,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc000403e17 0xc000403e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000403f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000403f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-src4j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-src4j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-src4j,UID:df0f0e56-cfcd-4146-b2f6-a0171bb8585c,ResourceVersion:4165351,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc0011460f7 0xc0011460f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011461e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001146200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.283: INFO: Pod "nginx-deployment-7b8c6f4498-tb8rx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tb8rx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-tb8rx,UID:1825dd3b-4ec9-4fb1-b8b2-6db98bdc1ff5,ResourceVersion:4165226,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc001146317 0xc001146318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011463e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001146470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.115,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ceed138dfd9dc9a56bee7b4e8b1460e815cecc1911609bdddc4b747240c4e18e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.284: INFO: Pod "nginx-deployment-7b8c6f4498-tkl9g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tkl9g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-tkl9g,UID:c3c01794-5a79-4119-88d0-4976e54720c3,ResourceVersion:4165385,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc001146617 0xc001146618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011466f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001146730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:19 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.284: INFO: Pod "nginx-deployment-7b8c6f4498-v58t8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v58t8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-v58t8,UID:facc6624-0c9b-4793-910b-39acc3efa9a5,ResourceVersion:4165262,Generation:0,CreationTimestamp:2020-08-11 09:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc0011467f7 0xc0011467f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011468c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001146990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:22:52 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.191,StartTime:2020-08-11 09:22:52 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-11 09:23:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5847ad887aa6a104fba918da10550253b9f32d0500d815bdbb4a700440e210db}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 11 09:23:22.284: INFO: Pod "nginx-deployment-7b8c6f4498-vt2r2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vt2r2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9793,SelfLink:/api/v1/namespaces/deployment-9793/pods/nginx-deployment-7b8c6f4498-vt2r2,UID:a346a999-6c0c-4aeb-8f6d-5f8ab16ac6e9,ResourceVersion:4165403,Generation:0,CreationTimestamp:2020-08-11 09:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 afd49c02-aea1-4711-87b9-55b62306bdb0 0xc001146ac7 0xc001146ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vsjkl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vsjkl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vsjkl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001146ba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001146bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:23:18 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-11 09:23:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:23:22.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9793" for this suite.
Aug 11 09:24:27.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:24:27.318: INFO: namespace deployment-9793 deletion completed in 1m3.927755969s

• [SLOW TEST:95.773 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
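
A note on the dumps above: they are the tail of the proportional-scaling test. The Deployment was scaled while a rollout was in flight, so some nginx-deployment-7b8c6f4498 pods are already Running while others are still Pending/ContainerCreating on iruya-worker. A minimal sketch of driving that kind of scale with client-go (a sketch only, assuming a client-go contemporary with this v1.15 cluster, where API calls take no context; the target replica count is illustrative):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Bump .spec.replicas on the live object; the deployment controller
        // then spreads the new count across the old and new ReplicaSets in
        // proportion to their current sizes (hence "proportional scaling").
        d, err := cs.AppsV1().Deployments("deployment-9793").Get("nginx-deployment", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        replicas := int32(30) // illustrative target size
        d.Spec.Replicas = &replicas
        if _, err := cs.AppsV1().Deployments("deployment-9793").Update(d); err != nil {
            panic(err)
        }
        fmt.Println("scaled nginx-deployment to", replicas)
    }
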
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:24:27.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 11 09:24:28.459: INFO: Waiting up to 5m0s for pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59" in namespace "downward-api-2672" to be "success or failure"
Aug 11 09:24:28.539: INFO: Pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59": Phase="Pending", Reason="", readiness=false. Elapsed: 79.299344ms
Aug 11 09:24:30.542: INFO: Pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082739596s
Aug 11 09:24:33.128: INFO: Pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.66814253s
Aug 11 09:24:35.132: INFO: Pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672440736s
Aug 11 09:24:37.367: INFO: Pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.907510182s
STEP: Saw pod success
Aug 11 09:24:37.367: INFO: Pod "downward-api-3e157ae1-691d-43a5-b984-3087193b6d59" satisfied condition "success or failure"
Aug 11 09:24:37.372: INFO: Trying to get logs from node iruya-worker pod downward-api-3e157ae1-691d-43a5-b984-3087193b6d59 container dapi-container: 
STEP: delete the pod
Aug 11 09:24:37.540: INFO: Waiting for pod downward-api-3e157ae1-691d-43a5-b984-3087193b6d59 to disappear
Aug 11 09:24:37.804: INFO: Pod downward-api-3e157ae1-691d-43a5-b984-3087193b6d59 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:24:37.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2672" for this suite.
Aug 11 09:24:44.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:24:44.521: INFO: namespace downward-api-2672 deletion completed in 6.342926234s

• [SLOW TEST:17.202 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
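
The test above injects the scheduling node's IP into the container environment through the downward API and checks the pod log for it. A sketch of an equivalent pod, under the same v1.15-era client-go assumption (pod name, namespace, and image are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        // status.hostIP resolves to the IP of the node the
                        // pod lands on, which is what the test asserts on.
                        ValueFrom: &corev1.EnvVarSource{
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
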
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:24:44.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 09:24:44.638: INFO: Creating ReplicaSet my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c
Aug 11 09:24:44.715: INFO: Pod name my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c: Found 0 pods out of 1
Aug 11 09:24:49.719: INFO: Pod name my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c: Found 1 pods out of 1
Aug 11 09:24:49.719: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c" is running
Aug 11 09:24:58.152: INFO: Pod "my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c-5jjgg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:24:46 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:24:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:24:46 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-11 09:24:44 +0000 UTC Reason: Message:}])
Aug 11 09:24:58.152: INFO: Trying to dial the pod
Aug 11 09:25:03.509: INFO: Controller my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c: Got expected result from replica 1 [my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c-5jjgg]: "my-hostname-basic-e193c309-aa4c-449e-9b55-895c1ad5376c-5jjgg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:25:03.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7289" for this suite.
Aug 11 09:25:09.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:25:09.797: INFO: namespace replicaset-7289 deletion completed in 6.284474899s

• [SLOW TEST:25.277 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
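
This ReplicaSet test creates one replica of a hostname-echoing server, waits for it to run, then dials it and expects the pod's own name (the "...-5jjgg" reply above) back. A sketch of an equivalent ReplicaSet object (same client-go hedge; the serve-hostname image and its port 9376 are the usual e2e choices but are assumptions here):

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"}
        rs := &appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &replicas,
                // The selector must match the template labels, and the test's
                // "Found N pods out of 1" poll counts pods with these labels.
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "my-hostname-basic",
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // illustrative
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
                        }},
                    },
                },
            },
        }
        if _, err := cs.AppsV1().ReplicaSets("default").Create(rs); err != nil {
            panic(err)
        }
    }
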
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:25:09.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 09:25:10.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 11 09:25:11.148: INFO: stderr: ""
Aug 11 09:25:11.148: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:54:28Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:25:11.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4794" for this suite.
Aug 11 09:25:19.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:25:19.706: INFO: namespace kubectl-4794 deletion completed in 8.502857068s

• [SLOW TEST:9.909 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
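
"All data is printed" here simply means the stdout above contains both a Client Version and a Server Version stanza. The same check can be scripted directly (kubectl path and kubeconfig copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config", "version").CombinedOutput()
        if err != nil {
            panic(err)
        }
        s := string(out)
        // The test passes only if both halves of the version report appear.
        if !strings.Contains(s, "Client Version") || !strings.Contains(s, "Server Version") {
            panic("kubectl version output is missing client or server info")
        }
        fmt.Print(s)
    }
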
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:25:19.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-6d6c096c-8252-4faf-8a3d-3491b5ccd790 in namespace container-probe-4142
Aug 11 09:25:28.018: INFO: Started pod busybox-6d6c096c-8252-4faf-8a3d-3491b5ccd790 in namespace container-probe-4142
STEP: checking the pod's current state and verifying that restartCount is present
Aug 11 09:25:28.638: INFO: Initial restart count of pod busybox-6d6c096c-8252-4faf-8a3d-3491b5ccd790 is 0
Aug 11 09:26:23.686: INFO: Restart count of pod container-probe-4142/busybox-6d6c096c-8252-4faf-8a3d-3491b5ccd790 is now 1 (55.048036925s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:26:23.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4142" for this suite.
Aug 11 09:26:31.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:26:31.979: INFO: namespace container-probe-4142 deletion completed in 8.235252621s

• [SLOW TEST:72.272 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
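
The probed pod creates /tmp/health, sleeps, then removes it, so the "cat /tmp/health" exec probe begins failing and the kubelet restarts the container; the 55s from restartCount 0 to 1 above is that sleep plus probe periods. A sketch of the probe wiring (v1.15-era types, where the probe handler struct is corev1.Handler; image, timings, and names are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "busybox",
                    Image: "busybox", // illustrative
                    // Healthy for 10s, then the probe file disappears.
                    Command: []string{"sh", "-c",
                        "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
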
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:26:31.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:26:32.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4" in namespace "downward-api-7374" to be "success or failure"
Aug 11 09:26:32.947: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4": Phase="Pending", Reason="", readiness=false. Elapsed: 261.155034ms
Aug 11 09:26:35.009: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323656026s
Aug 11 09:26:37.013: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32724729s
Aug 11 09:26:39.016: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330542075s
Aug 11 09:26:41.118: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4": Phase="Running", Reason="", readiness=true. Elapsed: 8.432339894s
Aug 11 09:26:43.121: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.43505113s
STEP: Saw pod success
Aug 11 09:26:43.121: INFO: Pod "downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4" satisfied condition "success or failure"
Aug 11 09:26:43.122: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4 container client-container: 
STEP: delete the pod
Aug 11 09:26:43.489: INFO: Waiting for pod downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4 to disappear
Aug 11 09:26:43.491: INFO: Pod downwardapi-volume-7cc9a874-d190-4a9a-86d2-67f06f193db4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:26:43.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7374" for this suite.
Aug 11 09:26:51.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:26:51.628: INFO: namespace downward-api-7374 deletion completed in 8.13318071s

• [SLOW TEST:19.649 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
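
"Mode on item file" means each file projected by a downward API volume may carry its own mode bits, overriding the volume's default; the container then stats the file and the test matches the printed mode. A sketch of that volume (same v1.15 client-go hedge; mode 0400 and all names are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        mode := int32(0400) // per-item mode, overrides the volume's DefaultMode
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "downwardapi-volume-demo",
                Labels: map[string]string{"zone": "us-east-1"},
            },
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/labels"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "podinfo", MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                Mode:     &mode,
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }
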
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:26:51.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 11 09:26:52.724: INFO: namespace kubectl-2519
Aug 11 09:26:52.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2519'
Aug 11 09:27:06.180: INFO: stderr: ""
Aug 11 09:27:06.180: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 11 09:27:07.184: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:07.184: INFO: Found 0 / 1
Aug 11 09:27:08.186: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:08.186: INFO: Found 0 / 1
Aug 11 09:27:09.202: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:09.202: INFO: Found 0 / 1
Aug 11 09:27:10.184: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:10.184: INFO: Found 0 / 1
Aug 11 09:27:11.189: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:11.189: INFO: Found 0 / 1
Aug 11 09:27:12.183: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:12.183: INFO: Found 0 / 1
Aug 11 09:27:13.285: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:13.285: INFO: Found 1 / 1
Aug 11 09:27:13.285: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 11 09:27:13.288: INFO: Selector matched 1 pods for map[app:redis]
Aug 11 09:27:13.288: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 11 09:27:13.288: INFO: wait on redis-master startup in kubectl-2519 
Aug 11 09:27:13.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-66gvl redis-master --namespace=kubectl-2519'
Aug 11 09:27:13.527: INFO: stderr: ""
Aug 11 09:27:13.527: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 11 Aug 09:27:11.863 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Aug 09:27:11.863 # Server started, Redis version 3.2.12\n1:M 11 Aug 09:27:11.863 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Aug 09:27:11.863 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 11 09:27:13.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2519'
Aug 11 09:27:13.690: INFO: stderr: ""
Aug 11 09:27:13.690: INFO: stdout: "service/rm2 exposed\n"
Aug 11 09:27:13.758: INFO: Service rm2 in namespace kubectl-2519 found.
STEP: exposing service
Aug 11 09:27:15.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2519'
Aug 11 09:27:15.937: INFO: stderr: ""
Aug 11 09:27:15.938: INFO: stdout: "service/rm3 exposed\n"
Aug 11 09:27:15.954: INFO: Service rm3 in namespace kubectl-2519 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:27:17.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2519" for this suite.
Aug 11 09:27:44.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:27:44.073: INFO: namespace kubectl-2519 deletion completed in 26.090800724s

• [SLOW TEST:52.444 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
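
The flow above: `kubectl create -f -` a replication controller, poll until its pod is ready, then expose the RC as service rm2 (port 1234 -> container port 6379) and expose rm2 again as rm3 (2345 -> 6379). Exposing a service re-uses its selector, so both services front the same redis pod. Driving the two expose calls from Go (commands copied from the log; the namespace is illustrative since the test generates its own):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes kubectl the same way the test does and echoes its output.
    func run(args ...string) {
        args = append([]string{"--kubeconfig=/root/.kube/config"}, args...)
        out, err := exec.Command("/usr/local/bin/kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }

    func main() {
        ns := "kubectl-2519" // illustrative
        // Service rm2 selects the RC's pods, mapping 1234 onto 6379.
        run("expose", "rc", "redis-master", "--name=rm2",
            "--port=1234", "--target-port=6379", "--namespace="+ns)
        // rm3 is exposed from rm2, inheriting the selector, on port 2345.
        run("expose", "service", "rm2", "--name=rm3",
            "--port=2345", "--target-port=6379", "--namespace="+ns)
    }
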
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:27:44.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-74c3f01d-7f1e-4437-b120-0f8b58cfa1a5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-74c3f01d-7f1e-4437-b120-0f8b58cfa1a5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:29:18.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8140" for this suite.
Aug 11 09:29:42.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:29:42.126: INFO: namespace projected-8140 deletion completed in 24.093542212s

• [SLOW TEST:118.053 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
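
This test mounts a ConfigMap through a projected volume, updates the ConfigMap, and polls the container until the kubelet's sync loop rewrites the projected file; that wait is where most of the 118 seconds go. A sketch of the volume plus the update (same v1.15 client-go hedge; every name and value is illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default"
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 1; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "projected-configmap-volume", MountPath: "/etc/projected",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
        // Update the ConfigMap; the kubelet rewrites the projected file on a
        // later sync, which is what "waiting to observe update" polls for.
        cm.Data["data-1"] = "value-2"
        if _, err := cs.CoreV1().ConfigMaps(ns).Update(cm); err != nil {
            panic(err)
        }
    }
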
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:29:42.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5887.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5887.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.97.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.97.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.97.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.97.154_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5887.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5887.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5887.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5887.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5887.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.97.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.97.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.97.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.97.154_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 11 09:30:00.670: INFO: Unable to read wheezy_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.673: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.674: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.676: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.688: INFO: Unable to read jessie_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.690: INFO: Unable to read jessie_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.692: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.694: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:00.705: INFO: Lookups using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 failed for: [wheezy_udp@dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_udp@dns-test-service.dns-5887.svc.cluster.local jessie_tcp@dns-test-service.dns-5887.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local]

Aug 11 09:30:06.295: INFO: Unable to read wheezy_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.297: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.338: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.340: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.357: INFO: Unable to read jessie_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.360: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.362: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:06.375: INFO: Lookups using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 failed for: [wheezy_udp@dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_udp@dns-test-service.dns-5887.svc.cluster.local jessie_tcp@dns-test-service.dns-5887.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local]

Aug 11 09:30:10.708: INFO: Unable to read wheezy_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.714: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.821: INFO: Unable to read jessie_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.823: INFO: Unable to read jessie_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.825: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.827: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:10.842: INFO: Lookups using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 failed for: [wheezy_udp@dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_udp@dns-test-service.dns-5887.svc.cluster.local jessie_tcp@dns-test-service.dns-5887.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local]

Aug 11 09:30:15.827: INFO: Unable to read wheezy_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.835: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.851: INFO: Unable to read jessie_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.852: INFO: Unable to read jessie_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.856: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:15.866: INFO: Lookups using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 failed for: [wheezy_udp@dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_udp@dns-test-service.dns-5887.svc.cluster.local jessie_tcp@dns-test-service.dns-5887.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local]

Aug 11 09:30:20.708: INFO: Unable to read wheezy_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.713: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.715: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.729: INFO: Unable to read jessie_udp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.749: INFO: Unable to read jessie_tcp@dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.753: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:20.763: INFO: Lookups using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 failed for: [wheezy_udp@dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@dns-test-service.dns-5887.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_udp@dns-test-service.dns-5887.svc.cluster.local jessie_tcp@dns-test-service.dns-5887.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local]

Aug 11 09:30:25.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:25.730: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local from pod dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047: the server could not find the requested resource (get pods dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047)
Aug 11 09:30:25.756: INFO: Lookups using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 failed for: [jessie_udp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5887.svc.cluster.local]

Aug 11 09:30:31.230: INFO: DNS probes using dns-5887/dns-test-f223c771-3ac8-429f-b6b3-0179b93c8047 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:30:32.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5887" for this suite.
Aug 11 09:30:40.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:30:40.943: INFO: namespace dns-5887 deletion completed in 8.198966202s

• [SLOW TEST:58.816 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
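
Each prober pod loops `dig` over the service's A, SRV, and PTR names via both UDP and TCP and drops an OK file per name; the "Unable to read" lines are the test polling result files the probers have not written yet, and the run converges once every lookup answers. The wheezy/jessie split only varies the prober base image; the query sets are identical. The same service lookups in plain Go, runnable from any pod in this cluster (names copied from the log):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        svc := "dns-test-service.dns-5887.svc.cluster.local"
        // A record: the service name resolves through cluster DNS.
        addrs, err := net.LookupHost(svc)
        if err != nil {
            panic(err)
        }
        fmt.Println("A:", addrs)
        // SRV record: _http._tcp.<service> yields port+target pairs,
        // matching the dig SRV probes above.
        _, srvs, err := net.LookupSRV("http", "tcp", svc)
        if err != nil {
            panic(err)
        }
        for _, s := range srvs {
            fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
        }
    }
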
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:30:40.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:30:41.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2352" for this suite.
Aug 11 09:30:48.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:30:48.091: INFO: namespace kubelet-test-2352 deletion completed in 6.742052956s

• [SLOW TEST:7.148 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:30:48.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-90
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 11 09:30:49.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 11 09:31:28.203: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.212:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-90 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 09:31:28.203: INFO: >>> kubeConfig: /root/.kube/config
I0811 09:31:28.239591       6 log.go:172] (0xc002596b00) (0xc0032a8820) Create stream
I0811 09:31:28.239632       6 log.go:172] (0xc002596b00) (0xc0032a8820) Stream added, broadcasting: 1
I0811 09:31:28.243781       6 log.go:172] (0xc002596b00) Reply frame received for 1
I0811 09:31:28.243822       6 log.go:172] (0xc002596b00) (0xc00219b360) Create stream
I0811 09:31:28.243834       6 log.go:172] (0xc002596b00) (0xc00219b360) Stream added, broadcasting: 3
I0811 09:31:28.244879       6 log.go:172] (0xc002596b00) Reply frame received for 3
I0811 09:31:28.244913       6 log.go:172] (0xc002596b00) (0xc001fb0dc0) Create stream
I0811 09:31:28.244925       6 log.go:172] (0xc002596b00) (0xc001fb0dc0) Stream added, broadcasting: 5
I0811 09:31:28.245723       6 log.go:172] (0xc002596b00) Reply frame received for 5
I0811 09:31:28.352308       6 log.go:172] (0xc002596b00) Data frame received for 5
I0811 09:31:28.352375       6 log.go:172] (0xc001fb0dc0) (5) Data frame handling
I0811 09:31:28.352413       6 log.go:172] (0xc002596b00) Data frame received for 3
I0811 09:31:28.352433       6 log.go:172] (0xc00219b360) (3) Data frame handling
I0811 09:31:28.352465       6 log.go:172] (0xc00219b360) (3) Data frame sent
I0811 09:31:28.352483       6 log.go:172] (0xc002596b00) Data frame received for 3
I0811 09:31:28.352518       6 log.go:172] (0xc00219b360) (3) Data frame handling
I0811 09:31:28.354758       6 log.go:172] (0xc002596b00) Data frame received for 1
I0811 09:31:28.354795       6 log.go:172] (0xc0032a8820) (1) Data frame handling
I0811 09:31:28.354827       6 log.go:172] (0xc0032a8820) (1) Data frame sent
I0811 09:31:28.354866       6 log.go:172] (0xc002596b00) (0xc0032a8820) Stream removed, broadcasting: 1
I0811 09:31:28.354893       6 log.go:172] (0xc002596b00) Go away received
I0811 09:31:28.355026       6 log.go:172] (0xc002596b00) (0xc0032a8820) Stream removed, broadcasting: 1
I0811 09:31:28.355057       6 log.go:172] (0xc002596b00) (0xc00219b360) Stream removed, broadcasting: 3
I0811 09:31:28.355071       6 log.go:172] (0xc002596b00) (0xc001fb0dc0) Stream removed, broadcasting: 5
Aug 11 09:31:28.355: INFO: Found all expected endpoints: [netserver-0]
Aug 11 09:31:28.358: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.135:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-90 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 11 09:31:28.358: INFO: >>> kubeConfig: /root/.kube/config
I0811 09:31:28.391764       6 log.go:172] (0xc001916630) (0xc0011fb7c0) Create stream
I0811 09:31:28.391812       6 log.go:172] (0xc001916630) (0xc0011fb7c0) Stream added, broadcasting: 1
I0811 09:31:28.394381       6 log.go:172] (0xc001916630) Reply frame received for 1
I0811 09:31:28.394445       6 log.go:172] (0xc001916630) (0xc0032a8aa0) Create stream
I0811 09:31:28.394465       6 log.go:172] (0xc001916630) (0xc0032a8aa0) Stream added, broadcasting: 3
I0811 09:31:28.395434       6 log.go:172] (0xc001916630) Reply frame received for 3
I0811 09:31:28.395479       6 log.go:172] (0xc001916630) (0xc00219b400) Create stream
I0811 09:31:28.395497       6 log.go:172] (0xc001916630) (0xc00219b400) Stream added, broadcasting: 5
I0811 09:31:28.396549       6 log.go:172] (0xc001916630) Reply frame received for 5
I0811 09:31:28.464624       6 log.go:172] (0xc001916630) Data frame received for 3
I0811 09:31:28.464663       6 log.go:172] (0xc0032a8aa0) (3) Data frame handling
I0811 09:31:28.464694       6 log.go:172] (0xc0032a8aa0) (3) Data frame sent
I0811 09:31:28.464924       6 log.go:172] (0xc001916630) Data frame received for 5
I0811 09:31:28.464948       6 log.go:172] (0xc001916630) Data frame received for 3
I0811 09:31:28.464971       6 log.go:172] (0xc0032a8aa0) (3) Data frame handling
I0811 09:31:28.464988       6 log.go:172] (0xc00219b400) (5) Data frame handling
I0811 09:31:28.466334       6 log.go:172] (0xc001916630) Data frame received for 1
I0811 09:31:28.466353       6 log.go:172] (0xc0011fb7c0) (1) Data frame handling
I0811 09:31:28.466361       6 log.go:172] (0xc0011fb7c0) (1) Data frame sent
I0811 09:31:28.466373       6 log.go:172] (0xc001916630) (0xc0011fb7c0) Stream removed, broadcasting: 1
I0811 09:31:28.466393       6 log.go:172] (0xc001916630) Go away received
I0811 09:31:28.466507       6 log.go:172] (0xc001916630) (0xc0011fb7c0) Stream removed, broadcasting: 1
I0811 09:31:28.466523       6 log.go:172] (0xc001916630) (0xc0032a8aa0) Stream removed, broadcasting: 3
I0811 09:31:28.466533       6 log.go:172] (0xc001916630) (0xc00219b400) Stream removed, broadcasting: 5
Aug 11 09:31:28.466: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:31:28.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-90" for this suite.
Aug 11 09:31:54.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:31:54.809: INFO: namespace pod-network-test-90 deletion completed in 26.332759973s

• [SLOW TEST:66.717 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
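For reference, the node-to-pod check above reduces to the curl probe quoted in the ExecWithOptions line: the suite execs into a host-networked helper pod and fetches /hostName from the target pod, then matches the reply against its expected endpoint list. A minimal way to run the same probe by hand (pod name, namespace, container, and target IP are the ones from this log; substitute your own elsewhere):

# Query the netserver pod from the host-side helper pod.
kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-90 \
  host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
    http://10.244.2.135:8080/hostName | grep -v '^\s*$'"
# Expected output: the target pod's hostname, e.g. netserver-1.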
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:31:54.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 09:31:54.895: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:31:55.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6583" for this suite.
Aug 11 09:32:02.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:32:02.083: INFO: namespace custom-resource-definition-6583 deletion completed in 6.088939316s

• [SLOW TEST:7.274 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
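The create/delete cycle above goes through the apiextensions API and logs nothing about the definition itself. A minimal sketch of exercising the same path by hand; the group and kind below are made up for illustration (v1beta1 being the CRD API served by this v1.15 cluster):

# Create a throwaway CRD, then delete it again.
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl --kubeconfig=/root/.kube/config delete crd foos.example.com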
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:32:02.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9429
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-9429
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9429
Aug 11 09:32:03.775: INFO: Found 0 stateful pods, waiting for 1
Aug 11 09:32:13.781: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 11 09:32:13.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 09:32:14.907: INFO: stderr: "I0811 09:32:14.096505    2791 log.go:172] (0xc000a4c630) (0xc000a16820) Create stream\nI0811 09:32:14.096560    2791 log.go:172] (0xc000a4c630) (0xc000a16820) Stream added, broadcasting: 1\nI0811 09:32:14.099427    2791 log.go:172] (0xc000a4c630) Reply frame received for 1\nI0811 09:32:14.099488    2791 log.go:172] (0xc000a4c630) (0xc000a12140) Create stream\nI0811 09:32:14.099516    2791 log.go:172] (0xc000a4c630) (0xc000a12140) Stream added, broadcasting: 3\nI0811 09:32:14.100656    2791 log.go:172] (0xc000a4c630) Reply frame received for 3\nI0811 09:32:14.100694    2791 log.go:172] (0xc000a4c630) (0xc000a12000) Create stream\nI0811 09:32:14.100705    2791 log.go:172] (0xc000a4c630) (0xc000a12000) Stream added, broadcasting: 5\nI0811 09:32:14.101544    2791 log.go:172] (0xc000a4c630) Reply frame received for 5\nI0811 09:32:14.181718    2791 log.go:172] (0xc000a4c630) Data frame received for 5\nI0811 09:32:14.181745    2791 log.go:172] (0xc000a12000) (5) Data frame handling\nI0811 09:32:14.181760    2791 log.go:172] (0xc000a12000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 09:32:14.897925    2791 log.go:172] (0xc000a4c630) Data frame received for 3\nI0811 09:32:14.897960    2791 log.go:172] (0xc000a12140) (3) Data frame handling\nI0811 09:32:14.897981    2791 log.go:172] (0xc000a12140) (3) Data frame sent\nI0811 09:32:14.897992    2791 log.go:172] (0xc000a4c630) Data frame received for 3\nI0811 09:32:14.897999    2791 log.go:172] (0xc000a12140) (3) Data frame handling\nI0811 09:32:14.898252    2791 log.go:172] (0xc000a4c630) Data frame received for 5\nI0811 09:32:14.898281    2791 log.go:172] (0xc000a12000) (5) Data frame handling\nI0811 09:32:14.901236    2791 log.go:172] (0xc000a4c630) Data frame received for 1\nI0811 09:32:14.901272    2791 log.go:172] (0xc000a16820) (1) Data frame handling\nI0811 09:32:14.901291    2791 log.go:172] (0xc000a16820) (1) Data frame sent\nI0811 09:32:14.901311    2791 log.go:172] (0xc000a4c630) (0xc000a16820) Stream removed, broadcasting: 1\nI0811 09:32:14.901731    2791 log.go:172] (0xc000a4c630) (0xc000a16820) Stream removed, broadcasting: 1\nI0811 09:32:14.901752    2791 log.go:172] (0xc000a4c630) (0xc000a12140) Stream removed, broadcasting: 3\nI0811 09:32:14.901765    2791 log.go:172] (0xc000a4c630) (0xc000a12000) Stream removed, broadcasting: 5\n"
Aug 11 09:32:14.907: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 09:32:14.907: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

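The mv above is how the suite marks a pod unhealthy without killing it: the ss pods serve /usr/share/nginx/html over HTTP and, assuming the readiness probe these tests are built around is an HTTP GET of index.html, moving the file aside makes the probe fail while the container keeps running. Moving it back restores readiness, which is what the later execs do:

# Break readiness on ss-0 (the probe's GET starts failing):
kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9429 ss-0 -- \
  /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Restore it later:
kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9429 ss-0 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
# The '|| true' keeps the exec's exit status 0 even when the file was never
# moved, which is why the "can't rename" stderr on ss-1 and ss-2 further
# down in this log is harmless.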
Aug 11 09:32:14.991: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 11 09:32:24.994: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 09:32:24.995: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 09:32:25.009: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug 11 09:32:25.009: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:32:25.009: INFO: 
Aug 11 09:32:25.009: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 11 09:32:26.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993078867s
Aug 11 09:32:27.350: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969463092s
Aug 11 09:32:28.406: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.652103061s
Aug 11 09:32:29.464: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.596639477s
Aug 11 09:32:30.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.538053259s
Aug 11 09:32:31.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.399978693s
Aug 11 09:32:32.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.053092703s
Aug 11 09:32:34.012: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.049429907s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9429
Aug 11 09:32:35.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:32:35.334: INFO: stderr: "I0811 09:32:35.276628    2810 log.go:172] (0xc0006e0a50) (0xc00051ca00) Create stream\nI0811 09:32:35.276663    2810 log.go:172] (0xc0006e0a50) (0xc00051ca00) Stream added, broadcasting: 1\nI0811 09:32:35.278162    2810 log.go:172] (0xc0006e0a50) Reply frame received for 1\nI0811 09:32:35.278194    2810 log.go:172] (0xc0006e0a50) (0xc000968000) Create stream\nI0811 09:32:35.278239    2810 log.go:172] (0xc0006e0a50) (0xc000968000) Stream added, broadcasting: 3\nI0811 09:32:35.278977    2810 log.go:172] (0xc0006e0a50) Reply frame received for 3\nI0811 09:32:35.279005    2810 log.go:172] (0xc0006e0a50) (0xc0009680a0) Create stream\nI0811 09:32:35.279014    2810 log.go:172] (0xc0006e0a50) (0xc0009680a0) Stream added, broadcasting: 5\nI0811 09:32:35.279658    2810 log.go:172] (0xc0006e0a50) Reply frame received for 5\nI0811 09:32:35.326281    2810 log.go:172] (0xc0006e0a50) Data frame received for 5\nI0811 09:32:35.326304    2810 log.go:172] (0xc0009680a0) (5) Data frame handling\nI0811 09:32:35.326313    2810 log.go:172] (0xc0009680a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 09:32:35.326472    2810 log.go:172] (0xc0006e0a50) Data frame received for 3\nI0811 09:32:35.326482    2810 log.go:172] (0xc000968000) (3) Data frame handling\nI0811 09:32:35.326489    2810 log.go:172] (0xc000968000) (3) Data frame sent\nI0811 09:32:35.326495    2810 log.go:172] (0xc0006e0a50) Data frame received for 3\nI0811 09:32:35.326500    2810 log.go:172] (0xc000968000) (3) Data frame handling\nI0811 09:32:35.326538    2810 log.go:172] (0xc0006e0a50) Data frame received for 5\nI0811 09:32:35.326590    2810 log.go:172] (0xc0009680a0) (5) Data frame handling\nI0811 09:32:35.330527    2810 log.go:172] (0xc0006e0a50) Data frame received for 1\nI0811 09:32:35.330543    2810 log.go:172] (0xc00051ca00) (1) Data frame handling\nI0811 09:32:35.330556    2810 log.go:172] (0xc00051ca00) (1) Data frame sent\nI0811 09:32:35.330564    2810 log.go:172] (0xc0006e0a50) (0xc00051ca00) Stream removed, broadcasting: 1\nI0811 09:32:35.330575    2810 log.go:172] (0xc0006e0a50) Go away received\nI0811 09:32:35.330919    2810 log.go:172] (0xc0006e0a50) (0xc00051ca00) Stream removed, broadcasting: 1\nI0811 09:32:35.330944    2810 log.go:172] (0xc0006e0a50) (0xc000968000) Stream removed, broadcasting: 3\nI0811 09:32:35.330957    2810 log.go:172] (0xc0006e0a50) (0xc0009680a0) Stream removed, broadcasting: 5\n"
Aug 11 09:32:35.335: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 09:32:35.335: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 09:32:35.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:32:35.627: INFO: stderr: "I0811 09:32:35.516837    2828 log.go:172] (0xc000846580) (0xc0007326e0) Create stream\nI0811 09:32:35.516919    2828 log.go:172] (0xc000846580) (0xc0007326e0) Stream added, broadcasting: 1\nI0811 09:32:35.519963    2828 log.go:172] (0xc000846580) Reply frame received for 1\nI0811 09:32:35.519992    2828 log.go:172] (0xc000846580) (0xc00057a780) Create stream\nI0811 09:32:35.520006    2828 log.go:172] (0xc000846580) (0xc00057a780) Stream added, broadcasting: 3\nI0811 09:32:35.520798    2828 log.go:172] (0xc000846580) Reply frame received for 3\nI0811 09:32:35.520832    2828 log.go:172] (0xc000846580) (0xc00057a820) Create stream\nI0811 09:32:35.520842    2828 log.go:172] (0xc000846580) (0xc00057a820) Stream added, broadcasting: 5\nI0811 09:32:35.521540    2828 log.go:172] (0xc000846580) Reply frame received for 5\nI0811 09:32:35.592927    2828 log.go:172] (0xc000846580) Data frame received for 5\nI0811 09:32:35.592949    2828 log.go:172] (0xc00057a820) (5) Data frame handling\nI0811 09:32:35.592962    2828 log.go:172] (0xc00057a820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0811 09:32:35.621657    2828 log.go:172] (0xc000846580) Data frame received for 5\nI0811 09:32:35.621672    2828 log.go:172] (0xc00057a820) (5) Data frame handling\nI0811 09:32:35.621685    2828 log.go:172] (0xc00057a820) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0811 09:32:35.621796    2828 log.go:172] (0xc000846580) Data frame received for 3\nI0811 09:32:35.621805    2828 log.go:172] (0xc00057a780) (3) Data frame handling\nI0811 09:32:35.621812    2828 log.go:172] (0xc00057a780) (3) Data frame sent\nI0811 09:32:35.622006    2828 log.go:172] (0xc000846580) Data frame received for 5\nI0811 09:32:35.622027    2828 log.go:172] (0xc00057a820) (5) Data frame handling\nI0811 09:32:35.622035    2828 log.go:172] (0xc00057a820) (5) Data frame sent\nI0811 09:32:35.622044    2828 log.go:172] (0xc000846580) Data frame received for 5\n+ true\nI0811 09:32:35.622050    2828 log.go:172] (0xc00057a820) (5) Data frame handling\nI0811 09:32:35.622081    2828 log.go:172] (0xc000846580) Data frame received for 3\nI0811 09:32:35.622097    2828 log.go:172] (0xc00057a780) (3) Data frame handling\nI0811 09:32:35.623505    2828 log.go:172] (0xc000846580) Data frame received for 1\nI0811 09:32:35.623517    2828 log.go:172] (0xc0007326e0) (1) Data frame handling\nI0811 09:32:35.623535    2828 log.go:172] (0xc0007326e0) (1) Data frame sent\nI0811 09:32:35.623641    2828 log.go:172] (0xc000846580) (0xc0007326e0) Stream removed, broadcasting: 1\nI0811 09:32:35.623675    2828 log.go:172] (0xc000846580) Go away received\nI0811 09:32:35.623913    2828 log.go:172] (0xc000846580) (0xc0007326e0) Stream removed, broadcasting: 1\nI0811 09:32:35.623927    2828 log.go:172] (0xc000846580) (0xc00057a780) Stream removed, broadcasting: 3\nI0811 09:32:35.623933    2828 log.go:172] (0xc000846580) (0xc00057a820) Stream removed, broadcasting: 5\n"
Aug 11 09:32:35.628: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 09:32:35.628: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 09:32:35.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:32:35.815: INFO: stderr: "I0811 09:32:35.752151    2847 log.go:172] (0xc000762370) (0xc00093e6e0) Create stream\nI0811 09:32:35.752195    2847 log.go:172] (0xc000762370) (0xc00093e6e0) Stream added, broadcasting: 1\nI0811 09:32:35.754203    2847 log.go:172] (0xc000762370) Reply frame received for 1\nI0811 09:32:35.754236    2847 log.go:172] (0xc000762370) (0xc0002ee280) Create stream\nI0811 09:32:35.754244    2847 log.go:172] (0xc000762370) (0xc0002ee280) Stream added, broadcasting: 3\nI0811 09:32:35.755166    2847 log.go:172] (0xc000762370) Reply frame received for 3\nI0811 09:32:35.755195    2847 log.go:172] (0xc000762370) (0xc0007e6000) Create stream\nI0811 09:32:35.755203    2847 log.go:172] (0xc000762370) (0xc0007e6000) Stream added, broadcasting: 5\nI0811 09:32:35.756014    2847 log.go:172] (0xc000762370) Reply frame received for 5\nI0811 09:32:35.809720    2847 log.go:172] (0xc000762370) Data frame received for 5\nI0811 09:32:35.809753    2847 log.go:172] (0xc0007e6000) (5) Data frame handling\nI0811 09:32:35.809779    2847 log.go:172] (0xc0007e6000) (5) Data frame sent\nI0811 09:32:35.809790    2847 log.go:172] (0xc000762370) Data frame received for 5\nI0811 09:32:35.809798    2847 log.go:172] (0xc0007e6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0811 09:32:35.809819    2847 log.go:172] (0xc000762370) Data frame received for 3\nI0811 09:32:35.809830    2847 log.go:172] (0xc0002ee280) (3) Data frame handling\nI0811 09:32:35.809843    2847 log.go:172] (0xc0002ee280) (3) Data frame sent\nI0811 09:32:35.809856    2847 log.go:172] (0xc000762370) Data frame received for 3\nI0811 09:32:35.809869    2847 log.go:172] (0xc0002ee280) (3) Data frame handling\nI0811 09:32:35.811198    2847 log.go:172] (0xc000762370) Data frame received for 1\nI0811 09:32:35.811216    2847 log.go:172] (0xc00093e6e0) (1) Data frame handling\nI0811 09:32:35.811226    2847 log.go:172] (0xc00093e6e0) (1) Data frame sent\nI0811 09:32:35.811460    2847 log.go:172] (0xc000762370) (0xc00093e6e0) Stream removed, broadcasting: 1\nI0811 09:32:35.811751    2847 log.go:172] (0xc000762370) (0xc00093e6e0) Stream removed, broadcasting: 1\nI0811 09:32:35.811767    2847 log.go:172] (0xc000762370) (0xc0002ee280) Stream removed, broadcasting: 3\nI0811 09:32:35.811778    2847 log.go:172] (0xc000762370) (0xc0007e6000) Stream removed, broadcasting: 5\n"
Aug 11 09:32:35.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 11 09:32:35.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 11 09:32:35.818: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug 11 09:32:45.822: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 09:32:45.822: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 11 09:32:45.822: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
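"Burst" here refers to the StatefulSet's podManagementPolicy: with Parallel management the controller launches and terminates pods without waiting for lower-ordinal pods to become Ready, which is why ss-1 and ss-2 came up together while ss-0 was still unready. A sketch of the relevant spec, assuming defaults for everything the log does not show (the image and probe port in particular are assumptions):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -n statefulset-9429 -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  podManagementPolicy: Parallel   # burst create/delete; no ordered readiness gating
  serviceName: test               # the headless service created earlier in this test
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx              # assumption; the log does not show the image
        readinessProbe:
          httpGet:
            path: /index.html     # the file the mv trick removes and restores
            port: 80
EOF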
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Aug 11 09:32:45.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 09:32:46.022: INFO: stderr: "I0811 09:32:45.947944    2866 log.go:172] (0xc0008b64d0) (0xc00086e8c0) Create stream\nI0811 09:32:45.948009    2866 log.go:172] (0xc0008b64d0) (0xc00086e8c0) Stream added, broadcasting: 1\nI0811 09:32:45.952061    2866 log.go:172] (0xc0008b64d0) Reply frame received for 1\nI0811 09:32:45.952101    2866 log.go:172] (0xc0008b64d0) (0xc00086e000) Create stream\nI0811 09:32:45.952115    2866 log.go:172] (0xc0008b64d0) (0xc00086e000) Stream added, broadcasting: 3\nI0811 09:32:45.953127    2866 log.go:172] (0xc0008b64d0) Reply frame received for 3\nI0811 09:32:45.953163    2866 log.go:172] (0xc0008b64d0) (0xc0005b8140) Create stream\nI0811 09:32:45.953180    2866 log.go:172] (0xc0008b64d0) (0xc0005b8140) Stream added, broadcasting: 5\nI0811 09:32:45.954149    2866 log.go:172] (0xc0008b64d0) Reply frame received for 5\nI0811 09:32:46.012510    2866 log.go:172] (0xc0008b64d0) Data frame received for 5\nI0811 09:32:46.012533    2866 log.go:172] (0xc0005b8140) (5) Data frame handling\nI0811 09:32:46.012540    2866 log.go:172] (0xc0005b8140) (5) Data frame sent\nI0811 09:32:46.012546    2866 log.go:172] (0xc0008b64d0) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 09:32:46.012550    2866 log.go:172] (0xc0005b8140) (5) Data frame handling\nI0811 09:32:46.012591    2866 log.go:172] (0xc0008b64d0) Data frame received for 3\nI0811 09:32:46.012623    2866 log.go:172] (0xc00086e000) (3) Data frame handling\nI0811 09:32:46.012646    2866 log.go:172] (0xc00086e000) (3) Data frame sent\nI0811 09:32:46.012667    2866 log.go:172] (0xc0008b64d0) Data frame received for 3\nI0811 09:32:46.012679    2866 log.go:172] (0xc00086e000) (3) Data frame handling\nI0811 09:32:46.017128    2866 log.go:172] (0xc0008b64d0) Data frame received for 1\nI0811 09:32:46.017151    2866 log.go:172] (0xc00086e8c0) (1) Data frame handling\nI0811 09:32:46.017163    2866 log.go:172] (0xc00086e8c0) (1) Data frame sent\nI0811 09:32:46.017179    2866 log.go:172] (0xc0008b64d0) (0xc00086e8c0) Stream removed, broadcasting: 1\nI0811 09:32:46.017199    2866 log.go:172] (0xc0008b64d0) Go away received\nI0811 09:32:46.017442    2866 log.go:172] (0xc0008b64d0) (0xc00086e8c0) Stream removed, broadcasting: 1\nI0811 09:32:46.017453    2866 log.go:172] (0xc0008b64d0) (0xc00086e000) Stream removed, broadcasting: 3\nI0811 09:32:46.017458    2866 log.go:172] (0xc0008b64d0) (0xc0005b8140) Stream removed, broadcasting: 5\n"
Aug 11 09:32:46.022: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 09:32:46.022: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 09:32:46.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 09:32:46.322: INFO: stderr: "I0811 09:32:46.176987    2887 log.go:172] (0xc000730630) (0xc0005f28c0) Create stream\nI0811 09:32:46.177034    2887 log.go:172] (0xc000730630) (0xc0005f28c0) Stream added, broadcasting: 1\nI0811 09:32:46.179235    2887 log.go:172] (0xc000730630) Reply frame received for 1\nI0811 09:32:46.179258    2887 log.go:172] (0xc000730630) (0xc0005f21e0) Create stream\nI0811 09:32:46.179264    2887 log.go:172] (0xc000730630) (0xc0005f21e0) Stream added, broadcasting: 3\nI0811 09:32:46.179864    2887 log.go:172] (0xc000730630) Reply frame received for 3\nI0811 09:32:46.179887    2887 log.go:172] (0xc000730630) (0xc000291a40) Create stream\nI0811 09:32:46.179894    2887 log.go:172] (0xc000730630) (0xc000291a40) Stream added, broadcasting: 5\nI0811 09:32:46.180422    2887 log.go:172] (0xc000730630) Reply frame received for 5\nI0811 09:32:46.246941    2887 log.go:172] (0xc000730630) Data frame received for 5\nI0811 09:32:46.246971    2887 log.go:172] (0xc000291a40) (5) Data frame handling\nI0811 09:32:46.246988    2887 log.go:172] (0xc000291a40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 09:32:46.312591    2887 log.go:172] (0xc000730630) Data frame received for 3\nI0811 09:32:46.312630    2887 log.go:172] (0xc0005f21e0) (3) Data frame handling\nI0811 09:32:46.312674    2887 log.go:172] (0xc0005f21e0) (3) Data frame sent\nI0811 09:32:46.312716    2887 log.go:172] (0xc000730630) Data frame received for 3\nI0811 09:32:46.312812    2887 log.go:172] (0xc0005f21e0) (3) Data frame handling\nI0811 09:32:46.312850    2887 log.go:172] (0xc000730630) Data frame received for 5\nI0811 09:32:46.312880    2887 log.go:172] (0xc000291a40) (5) Data frame handling\nI0811 09:32:46.314659    2887 log.go:172] (0xc000730630) Data frame received for 1\nI0811 09:32:46.314679    2887 log.go:172] (0xc0005f28c0) (1) Data frame handling\nI0811 09:32:46.314689    2887 log.go:172] (0xc0005f28c0) (1) Data frame sent\nI0811 09:32:46.314703    2887 log.go:172] (0xc000730630) (0xc0005f28c0) Stream removed, broadcasting: 1\nI0811 09:32:46.314716    2887 log.go:172] (0xc000730630) Go away received\nI0811 09:32:46.315032    2887 log.go:172] (0xc000730630) (0xc0005f28c0) Stream removed, broadcasting: 1\nI0811 09:32:46.315055    2887 log.go:172] (0xc000730630) (0xc0005f21e0) Stream removed, broadcasting: 3\nI0811 09:32:46.315066    2887 log.go:172] (0xc000730630) (0xc000291a40) Stream removed, broadcasting: 5\n"
Aug 11 09:32:46.322: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 09:32:46.322: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 09:32:46.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 11 09:32:46.582: INFO: stderr: "I0811 09:32:46.449642    2904 log.go:172] (0xc000876630) (0xc0003ac820) Create stream\nI0811 09:32:46.449690    2904 log.go:172] (0xc000876630) (0xc0003ac820) Stream added, broadcasting: 1\nI0811 09:32:46.452012    2904 log.go:172] (0xc000876630) Reply frame received for 1\nI0811 09:32:46.452040    2904 log.go:172] (0xc000876630) (0xc000922000) Create stream\nI0811 09:32:46.452048    2904 log.go:172] (0xc000876630) (0xc000922000) Stream added, broadcasting: 3\nI0811 09:32:46.452702    2904 log.go:172] (0xc000876630) Reply frame received for 3\nI0811 09:32:46.452720    2904 log.go:172] (0xc000876630) (0xc0003ac8c0) Create stream\nI0811 09:32:46.452824    2904 log.go:172] (0xc000876630) (0xc0003ac8c0) Stream added, broadcasting: 5\nI0811 09:32:46.453436    2904 log.go:172] (0xc000876630) Reply frame received for 5\nI0811 09:32:46.505562    2904 log.go:172] (0xc000876630) Data frame received for 5\nI0811 09:32:46.505585    2904 log.go:172] (0xc0003ac8c0) (5) Data frame handling\nI0811 09:32:46.505599    2904 log.go:172] (0xc0003ac8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0811 09:32:46.573097    2904 log.go:172] (0xc000876630) Data frame received for 3\nI0811 09:32:46.573247    2904 log.go:172] (0xc000922000) (3) Data frame handling\nI0811 09:32:46.573362    2904 log.go:172] (0xc000922000) (3) Data frame sent\nI0811 09:32:46.573386    2904 log.go:172] (0xc000876630) Data frame received for 3\nI0811 09:32:46.573397    2904 log.go:172] (0xc000922000) (3) Data frame handling\nI0811 09:32:46.573457    2904 log.go:172] (0xc000876630) Data frame received for 5\nI0811 09:32:46.573494    2904 log.go:172] (0xc0003ac8c0) (5) Data frame handling\nI0811 09:32:46.574957    2904 log.go:172] (0xc000876630) Data frame received for 1\nI0811 09:32:46.574973    2904 log.go:172] (0xc0003ac820) (1) Data frame handling\nI0811 09:32:46.574979    2904 log.go:172] (0xc0003ac820) (1) Data frame sent\nI0811 09:32:46.574993    2904 log.go:172] (0xc000876630) (0xc0003ac820) Stream removed, broadcasting: 1\nI0811 09:32:46.575008    2904 log.go:172] (0xc000876630) Go away received\nI0811 09:32:46.575385    2904 log.go:172] (0xc000876630) (0xc0003ac820) Stream removed, broadcasting: 1\nI0811 09:32:46.575405    2904 log.go:172] (0xc000876630) (0xc000922000) Stream removed, broadcasting: 3\nI0811 09:32:46.575415    2904 log.go:172] (0xc000876630) (0xc0003ac8c0) Stream removed, broadcasting: 5\n"
Aug 11 09:32:46.582: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 11 09:32:46.582: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 11 09:32:46.582: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 09:32:46.585: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 11 09:32:56.592: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 09:32:56.592: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 09:32:56.592: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 11 09:32:56.825: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:32:56.825: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:32:56.825: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:32:56.825: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:32:56.825: INFO: 
Aug 11 09:32:56.825: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:32:59.240: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:32:59.240: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:32:59.240: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:32:59.240: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:32:59.240: INFO: 
Aug 11 09:32:59.240: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:33:00.472: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:33:00.472: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:33:00.472: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:00.472: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:00.472: INFO: 
Aug 11 09:33:00.472: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:33:01.489: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:33:01.489: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:33:01.489: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:01.489: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:01.489: INFO: 
Aug 11 09:33:01.489: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:33:02.494: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:33:02.494: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:33:02.494: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:02.494: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:02.494: INFO: 
Aug 11 09:33:02.494: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:33:03.497: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:33:03.497: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:33:03.497: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:03.497: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:03.497: INFO: 
Aug 11 09:33:03.497: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:33:04.754: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:33:04.754: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:33:04.754: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:04.754: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:04.755: INFO: 
Aug 11 09:33:04.755: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 11 09:33:06.035: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 11 09:33:06.035: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:04 +0000 UTC  }]
Aug 11 09:33:06.035: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:06.035: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-11 09:32:25 +0000 UTC  }]
Aug 11 09:33:06.035: INFO: 
Aug 11 09:33:06.035: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9429
Aug 11 09:33:07.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:33:07.159: INFO: rc: 1
Aug 11 09:33:07.159: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0028be360 exit status 1   true [0xc002dfe170 0xc002dfe188 0xc002dfe1a0] [0xc002dfe170 0xc002dfe188 0xc002dfe1a0] [0xc002dfe180 0xc002dfe198] [0xba7140 0xba7140] 0xc003094b40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
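The retries below are expected: the burst scale-down deletes ss-0 outright, so the restore exec fails first with "container not found" while the pod terminates, then with NotFound once the pod object is gone, and the framework simply re-runs the command every 10s until its timeout. The same convergence can be watched directly, e.g.:

# Poll the controller's view of the scale-down (should reach 0):
kubectl --kubeconfig=/root/.kube/config -n statefulset-9429 \
  get statefulset ss -o jsonpath='{.status.replicas}'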
Aug 11 09:33:17.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:33:17.246: INFO: rc: 1
Aug 11 09:33:17.246: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028be420 exit status 1   true [0xc002dfe1a8 0xc002dfe1c0 0xc002dfe1d8] [0xc002dfe1a8 0xc002dfe1c0 0xc002dfe1d8] [0xc002dfe1b8 0xc002dfe1d0] [0xba7140 0xba7140] 0xc003094ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:33:27.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:33:27.340: INFO: rc: 1
Aug 11 09:33:27.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002fd3ad0 exit status 1   true [0xc002e64078 0xc002e64090 0xc002e640a8] [0xc002e64078 0xc002e64090 0xc002e640a8] [0xc002e64088 0xc002e640a0] [0xba7140 0xba7140] 0xc002bebec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:33:37.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:33:37.441: INFO: rc: 1
Aug 11 09:33:37.441: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028be4e0 exit status 1   true [0xc002dfe1e0 0xc002dfe1f8 0xc002dfe210] [0xc002dfe1e0 0xc002dfe1f8 0xc002dfe210] [0xc002dfe1f0 0xc002dfe208] [0xba7140 0xba7140] 0xc003095200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:33:47.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:33:47.540: INFO: rc: 1
Aug 11 09:33:47.541: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028be5a0 exit status 1   true [0xc002dfe218 0xc002dfe238 0xc002dfe250] [0xc002dfe218 0xc002dfe238 0xc002dfe250] [0xc002dfe230 0xc002dfe248] [0xba7140 0xba7140] 0xc003095500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:33:57.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:33:57.636: INFO: rc: 1
Aug 11 09:33:57.636: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000397dd0 exit status 1   true [0xc0027d0688 0xc0027d06a0 0xc0027d06b8] [0xc0027d0688 0xc0027d06a0 0xc0027d06b8] [0xc0027d0698 0xc0027d06b0] [0xba7140 0xba7140] 0xc0025f7200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:34:07.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:34:07.742: INFO: rc: 1
Aug 11 09:34:07.742: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2a090 exit status 1   true [0xc0000ea4c0 0xc0000ea5f0 0xc0000ea6c8] [0xc0000ea4c0 0xc0000ea5f0 0xc0000ea6c8] [0xc0000ea588 0xc0000ea680] [0xba7140 0xba7140] 0xc001e0fa40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:34:17.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:34:17.831: INFO: rc: 1
Aug 11 09:34:17.831: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90150 exit status 1   true [0xc000512038 0xc000d5a0b8 0xc000d5a0f8] [0xc000512038 0xc000d5a0b8 0xc000d5a0f8] [0xc000d5a0a8 0xc000d5a0e8] [0xba7140 0xba7140] 0xc002746420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:34:27.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:34:27.940: INFO: rc: 1
Aug 11 09:34:27.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00131e060 exit status 1   true [0xc000976088 0xc0009762a8 0xc000976400] [0xc000976088 0xc0009762a8 0xc000976400] [0xc000976280 0xc000976348] [0xba7140 0xba7140] 0xc0021b30e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:34:37.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:34:38.031: INFO: rc: 1
Aug 11 09:34:38.031: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002d7c090 exit status 1   true [0xc002dfe000 0xc002dfe018 0xc002dfe030] [0xc002dfe000 0xc002dfe018 0xc002dfe030] [0xc002dfe010 0xc002dfe028] [0xba7140 0xba7140] 0xc0021a0f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:34:48.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:34:48.142: INFO: rc: 1
Aug 11 09:34:48.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00131e150 exit status 1   true [0xc0009764d0 0xc0009767b0 0xc000976930] [0xc0009764d0 0xc0009767b0 0xc000976930] [0xc0009766e8 0xc000976908] [0xba7140 0xba7140] 0xc001b02b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:34:58.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:34:58.226: INFO: rc: 1
Aug 11 09:34:58.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00131e210 exit status 1   true [0xc000976a00 0xc000976ab0 0xc000976ba0] [0xc000976a00 0xc000976ab0 0xc000976ba0] [0xc000976a78 0xc000976b48] [0xba7140 0xba7140] 0xc0021fac60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:35:08.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:35:08.349: INFO: rc: 1
Aug 11 09:35:08.349: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b902d0 exit status 1   true [0xc000d5a100 0xc000d5a178 0xc000d5a250] [0xc000d5a100 0xc000d5a178 0xc000d5a250] [0xc000d5a150 0xc000d5a220] [0xba7140 0xba7140] 0xc002746ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:35:18.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:35:18.434: INFO: rc: 1
Aug 11 09:35:18.435: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90390 exit status 1   true [0xc000d5a260 0xc000d5a328 0xc000d5a370] [0xc000d5a260 0xc000d5a328 0xc000d5a370] [0xc000d5a310 0xc000d5a350] [0xba7140 0xba7140] 0xc002747e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:35:28.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:35:28.531: INFO: rc: 1
Aug 11 09:35:28.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90450 exit status 1   true [0xc000d5a388 0xc000d5a4a0 0xc000d5a548] [0xc000d5a388 0xc000d5a4a0 0xc000d5a548] [0xc000d5a490 0xc000d5a4b8] [0xba7140 0xba7140] 0xc001ea3140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:35:38.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:35:38.857: INFO: rc: 1
Aug 11 09:35:38.857: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90510 exit status 1   true [0xc000d5a578 0xc000d5a638 0xc000d5a658] [0xc000d5a578 0xc000d5a638 0xc000d5a658] [0xc000d5a620 0xc000d5a648] [0xba7140 0xba7140] 0xc001e19560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:35:48.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:35:49.020: INFO: rc: 1
Aug 11 09:35:49.020: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90600 exit status 1   true [0xc000d5a690 0xc000d5a730 0xc000d5a7c0] [0xc000d5a690 0xc000d5a730 0xc000d5a7c0] [0xc000d5a6d0 0xc000d5a760] [0xba7140 0xba7140] 0xc001790de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:35:59.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:35:59.309: INFO: rc: 1
Aug 11 09:35:59.309: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2a1e0 exit status 1   true [0xc0000ea7b0 0xc0000ea860 0xc0000ea898] [0xc0000ea7b0 0xc0000ea860 0xc0000ea898] [0xc0000ea840 0xc0000ea888] [0xba7140 0xba7140] 0xc0019dde00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:36:09.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:36:09.413: INFO: rc: 1
Aug 11 09:36:09.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002d7c060 exit status 1   true [0xc002dfe000 0xc002dfe018 0xc002dfe030] [0xc002dfe000 0xc002dfe018 0xc002dfe030] [0xc002dfe010 0xc002dfe028] [0xba7140 0xba7140] 0xc001718720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:36:19.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:36:19.506: INFO: rc: 1
Aug 11 09:36:19.506: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b900f0 exit status 1   true [0xc000976088 0xc0009762a8 0xc000976400] [0xc000976088 0xc0009762a8 0xc000976400] [0xc000976280 0xc000976348] [0xba7140 0xba7140] 0xc001c31da0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:36:29.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:36:29.601: INFO: rc: 1
Aug 11 09:36:29.601: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2a0f0 exit status 1   true [0xc000d5a018 0xc000d5a0c8 0xc000d5a100] [0xc000d5a018 0xc000d5a0c8 0xc000d5a100] [0xc000d5a0b8 0xc000d5a0f8] [0xba7140 0xba7140] 0xc001ea24e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:36:39.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:36:39.700: INFO: rc: 1
Aug 11 09:36:39.700: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90300 exit status 1   true [0xc0009764d0 0xc0009767b0 0xc000976930] [0xc0009764d0 0xc0009767b0 0xc000976930] [0xc0009766e8 0xc000976908] [0xba7140 0xba7140] 0xc0021b20c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:36:49.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:36:49.792: INFO: rc: 1
Aug 11 09:36:49.792: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2a1b0 exit status 1   true [0xc000d5a110 0xc000d5a1e8 0xc000d5a260] [0xc000d5a110 0xc000d5a1e8 0xc000d5a260] [0xc000d5a178 0xc000d5a250] [0xba7140 0xba7140] 0xc002746240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:36:59.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:37:00.685: INFO: rc: 1
Aug 11 09:37:00.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90420 exit status 1   true [0xc000976a00 0xc000976ab0 0xc000976ba0] [0xc000976a00 0xc000976ab0 0xc000976ba0] [0xc000976a78 0xc000976b48] [0xba7140 0xba7140] 0xc0021b3440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:37:10.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:37:14.835: INFO: rc: 1
Aug 11 09:37:14.835: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00131e0f0 exit status 1   true [0xc0000ea328 0xc0000ea588 0xc0000ea680] [0xc0000ea328 0xc0000ea588 0xc0000ea680] [0xc0000ea568 0xc0000ea608] [0xba7140 0xba7140] 0xc001f29aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:37:24.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:37:25.112: INFO: rc: 1
Aug 11 09:37:25.112: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002d7c1b0 exit status 1   true [0xc002dfe038 0xc002dfe050 0xc002dfe068] [0xc002dfe038 0xc002dfe050 0xc002dfe068] [0xc002dfe048 0xc002dfe060] [0xba7140 0xba7140] 0xc0021a0f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:37:35.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:37:35.202: INFO: rc: 1
Aug 11 09:37:35.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2a2d0 exit status 1   true [0xc000d5a298 0xc000d5a340 0xc000d5a388] [0xc000d5a298 0xc000d5a340 0xc000d5a388] [0xc000d5a328 0xc000d5a370] [0xba7140 0xba7140] 0xc002746720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:37:45.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:37:45.289: INFO: rc: 1
Aug 11 09:37:45.289: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002b90570 exit status 1   true [0xc000976bc8 0xc000976d48 0xc000976f08] [0xc000976bc8 0xc000976d48 0xc000976f08] [0xc000976c68 0xc000976eb0] [0xba7140 0xba7140] 0xc0021faba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:37:55.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:37:55.382: INFO: rc: 1
Aug 11 09:37:55.382: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2a3c0 exit status 1   true [0xc000d5a450 0xc000d5a4a8 0xc000d5a578] [0xc000d5a450 0xc000d5a4a8 0xc000d5a578] [0xc000d5a4a0 0xc000d5a548] [0xba7140 0xba7140] 0xc002746d20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:38:05.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:38:05.472: INFO: rc: 1
Aug 11 09:38:05.472: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0011a00c0 exit status 1   true [0xc002632018 0xc002632030 0xc002632048] [0xc002632018 0xc002632030 0xc002632048] [0xc002632028 0xc002632040] [0xba7140 0xba7140] 0xc0017b73e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 11 09:38:15.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9429 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 11 09:38:15.563: INFO: rc: 1
Aug 11 09:38:15.563: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Aug 11 09:38:15.563: INFO: Scaling statefulset ss to 0
Aug 11 09:38:15.568: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 11 09:38:15.569: INFO: Deleting all statefulset in ns statefulset-9429
Aug 11 09:38:15.571: INFO: Scaling statefulset ss to 0
Aug 11 09:38:15.575: INFO: Waiting for statefulset status.replicas updated to 0
Aug 11 09:38:15.576: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:38:15.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9429" for this suite.
Aug 11 09:38:21.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:38:21.747: INFO: namespace statefulset-9429 deletion completed in 6.12155273s

• [SLOW TEST:379.664 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
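
The minutes of identical NotFound failures above are the e2e framework retrying RunHostCmd: the burst-scaling spec shells into ss-0 to restore the file its readiness probe serves, but the pod has already been deleted by the scale-down, so every kubectl exec exits 1 and the framework sleeps 10s and tries again until it gives up and proceeds to teardown. A minimal Go sketch of that retry shape, assuming apimachinery's wait.Poll and the exact command from the log; this is an illustration, not the framework's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Hypothetical re-creation of the loop seen in the log:
        // run `kubectl exec` every 10s until it succeeds or ~4m elapse.
        script := `mv -v /tmp/index.html /usr/share/nginx/html/ || true`
        err := wait.Poll(10*time.Second, 4*time.Minute, func() (bool, error) {
            out, err := exec.Command("kubectl",
                "--kubeconfig=/root/.kube/config",
                "exec", "--namespace=statefulset-9429", "ss-0",
                "--", "/bin/sh", "-x", "-c", script).CombinedOutput()
            if err != nil {
                // e.g. `Error from server (NotFound): pods "ss-0" not found`
                fmt.Printf("retrying after failure: %v\n%s", err, out)
                return false, nil // not done; poll again in 10s
            }
            return true, nil
        })
        if err != nil {
            fmt.Println("gave up:", err) // the test tolerates this and scales to 0
        }
    }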
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:38:21.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:38:21.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de" in namespace "downward-api-7141" to be "success or failure"
Aug 11 09:38:21.832: INFO: Pod "downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.522705ms
Aug 11 09:38:23.834: INFO: Pod "downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007289626s
Aug 11 09:38:25.838: INFO: Pod "downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010798292s
STEP: Saw pod success
Aug 11 09:38:25.838: INFO: Pod "downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de" satisfied condition "success or failure"
Aug 11 09:38:25.840: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de container client-container: 
STEP: delete the pod
Aug 11 09:38:25.873: INFO: Waiting for pod downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de to disappear
Aug 11 09:38:25.890: INFO: Pod downwardapi-volume-1f56c88b-8669-4eef-b03d-2eeb8e0ef3de no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:38:25.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7141" for this suite.
Aug 11 09:38:31.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:38:31.976: INFO: namespace downward-api-7141 deletion completed in 6.083534129s

• [SLOW TEST:10.229 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
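
The spec above reads limits.cpu through a downward API volume; because the test container sets no CPU limit, the kubelet falls back to the node's allocatable CPU when writing the file. A sketch of the volume wiring with k8s.io/api/core/v1 types; the file and container names are illustrative, not the spec's actual values:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // downwardAPICPUVolume builds a volume containing one file whose content
    // is the container's CPU limit; with no limit set, the kubelet writes the
    // node's allocatable CPU instead.
    func downwardAPICPUVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "cpu_limit", // illustrative file name
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.cpu",
                        },
                    }},
                },
            },
        }
    }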
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:38:31.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:38:37.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3232" for this suite.
Aug 11 09:38:43.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:38:43.714: INFO: namespace watch-3232 deletion completed in 6.182096091s

• [SLOW TEST:11.737 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
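
This spec checks an ordering guarantee: watches opened at different resource versions must all deliver events for the same objects in the same order, since events are sequenced by resourceVersion. A hedged sketch of the comparison using client-go, assuming a caller that diffs the two event slices; names and the fixed event count are illustrative:

    package sketches

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
    )

    // sameOrder opens two watches from the same resourceVersion and collects
    // n events from each; the caller asserts both slices arrive in the same
    // order. Collection blocks until n events have been delivered.
    func sameOrder(c kubernetes.Interface, ns, rv string, n int) ([]watch.Event, []watch.Event, error) {
        opts := metav1.ListOptions{ResourceVersion: rv}
        w1, err := c.CoreV1().ConfigMaps(ns).Watch(context.TODO(), opts)
        if err != nil {
            return nil, nil, err
        }
        defer w1.Stop()
        w2, err := c.CoreV1().ConfigMaps(ns).Watch(context.TODO(), opts)
        if err != nil {
            return nil, nil, err
        }
        defer w2.Stop()
        take := func(w watch.Interface) []watch.Event {
            evs := make([]watch.Event, 0, n)
            for ev := range w.ResultChan() {
                evs = append(evs, ev)
                if len(evs) == n {
                    break
                }
            }
            return evs
        }
        return take(w1), take(w2), nil
    }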
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:38:43.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zg59
STEP: Creating a pod to test atomic-volume-subpath
Aug 11 09:38:43.786: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zg59" in namespace "subpath-600" to be "success or failure"
Aug 11 09:38:43.790: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575555ms
Aug 11 09:38:45.792: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0067394s
Aug 11 09:38:48.112: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32605396s
Aug 11 09:38:50.115: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 6.329496267s
Aug 11 09:38:52.118: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 8.332160279s
Aug 11 09:38:54.121: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 10.335715953s
Aug 11 09:38:56.125: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 12.339187709s
Aug 11 09:38:58.127: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 14.341715359s
Aug 11 09:39:00.131: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 16.344929679s
Aug 11 09:39:02.134: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 18.348613242s
Aug 11 09:39:04.138: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 20.352098612s
Aug 11 09:39:06.141: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 22.355623966s
Aug 11 09:39:08.145: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Running", Reason="", readiness=true. Elapsed: 24.359058997s
Aug 11 09:39:10.148: INFO: Pod "pod-subpath-test-configmap-zg59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.362321151s
STEP: Saw pod success
Aug 11 09:39:10.148: INFO: Pod "pod-subpath-test-configmap-zg59" satisfied condition "success or failure"
Aug 11 09:39:10.151: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-zg59 container test-container-subpath-configmap-zg59: 
STEP: delete the pod
Aug 11 09:39:10.439: INFO: Waiting for pod pod-subpath-test-configmap-zg59 to disappear
Aug 11 09:39:10.609: INFO: Pod pod-subpath-test-configmap-zg59 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zg59
Aug 11 09:39:10.609: INFO: Deleting pod "pod-subpath-test-configmap-zg59" in namespace "subpath-600"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:39:10.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-600" for this suite.
Aug 11 09:39:16.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:39:16.695: INFO: namespace subpath-600 deletion completed in 6.080886608s

• [SLOW TEST:32.980 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
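
"mountPath of existing file" refers to the subPath form of a volume mount: a single entry from the volume is projected over one file path, leaving the rest of the target directory untouched, rather than shadowing the whole directory. A sketch with illustrative names:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // subPathMount mounts one entry of the configmap volume over a single
    // existing file instead of replacing the enclosing directory.
    func subPathMount() corev1.VolumeMount {
        return corev1.VolumeMount{
            Name:      "configmap-zg59",
            MountPath: "/usr/share/nginx/html/index.html", // existing file path
            SubPath:   "index.html",                       // one entry from the volume
        }
    }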
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:39:16.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-73bbe135-1174-4398-9780-529bc1ea67fb
STEP: Creating a pod to test consume configMaps
Aug 11 09:39:16.861: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194" in namespace "configmap-6625" to be "success or failure"
Aug 11 09:39:16.874: INFO: Pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194": Phase="Pending", Reason="", readiness=false. Elapsed: 12.603218ms
Aug 11 09:39:18.915: INFO: Pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053951595s
Aug 11 09:39:20.921: INFO: Pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059775331s
Aug 11 09:39:22.981: INFO: Pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11966526s
Aug 11 09:39:25.024: INFO: Pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162232728s
STEP: Saw pod success
Aug 11 09:39:25.024: INFO: Pod "pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194" satisfied condition "success or failure"
Aug 11 09:39:25.028: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194 container configmap-volume-test: 
STEP: delete the pod
Aug 11 09:39:25.299: INFO: Waiting for pod pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194 to disappear
Aug 11 09:39:25.550: INFO: Pod pod-configmaps-5f92144f-51b4-439e-b069-f3c1f8d96194 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:39:25.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6625" for this suite.
Aug 11 09:39:31.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:39:31.747: INFO: namespace configmap-6625 deletion completed in 6.192448744s

• [SLOW TEST:15.052 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
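
"Volume with mappings" refers to the Items field of the volume source: instead of materializing one file per ConfigMap key, selected keys are renamed onto chosen relative paths under the mount point. A sketch; the key and path values are illustrative:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // mappedConfigMapVolume remaps one ConfigMap key onto a nested relative
    // path, so the pod sees <mountPath>/path/to/data-2 rather than a flat
    // file named after the key.
    func mappedConfigMapVolume(name string) corev1.VolumeSource {
        return corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: name},
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",         // key in the ConfigMap (illustrative)
                    Path: "path/to/data-2", // file created under the mount point
                }},
            },
        }
    }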
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:39:31.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 11 09:39:31.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624" in namespace "downward-api-8818" to be "success or failure"
Aug 11 09:39:31.916: INFO: Pod "downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624": Phase="Pending", Reason="", readiness=false. Elapsed: 15.750567ms
Aug 11 09:39:33.920: INFO: Pod "downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020133395s
Aug 11 09:39:35.924: INFO: Pod "downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02380238s
Aug 11 09:39:37.928: INFO: Pod "downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027728241s
STEP: Saw pod success
Aug 11 09:39:37.928: INFO: Pod "downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624" satisfied condition "success or failure"
Aug 11 09:39:37.931: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624 container client-container: 
STEP: delete the pod
Aug 11 09:39:38.023: INFO: Waiting for pod downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624 to disappear
Aug 11 09:39:38.202: INFO: Pod downwardapi-volume-66d66889-5178-4aae-ac99-0c92c8c91624 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:39:38.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8818" for this suite.
Aug 11 09:39:46.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:39:46.432: INFO: namespace downward-api-8818 deletion completed in 8.159107198s

• [SLOW TEST:14.685 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
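
Same mechanism as the earlier downward API spec, but the file is backed by fieldRef (object metadata) rather than resourceFieldRef (compute resources). Sketch, with an illustrative file name:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // podnameFile exposes the pod's own name as a downward API volume file.
    func podnameFile() corev1.DownwardAPIVolumeFile {
        return corev1.DownwardAPIVolumeFile{
            Path: "podname",
            FieldRef: &corev1.ObjectFieldSelector{
                APIVersion: "v1",
                FieldPath:  "metadata.name",
            },
        }
    }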
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:39:46.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4251/secret-test-744576cd-f449-4add-a6e5-966af54a0ec1
STEP: Creating a pod to test consume secrets
Aug 11 09:39:46.555: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0" in namespace "secrets-4251" to be "success or failure"
Aug 11 09:39:46.559: INFO: Pod "pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172025ms
Aug 11 09:39:48.563: INFO: Pod "pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007704388s
Aug 11 09:39:50.570: INFO: Pod "pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.014793091s
Aug 11 09:39:52.573: INFO: Pod "pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01828398s
STEP: Saw pod success
Aug 11 09:39:52.573: INFO: Pod "pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0" satisfied condition "success or failure"
Aug 11 09:39:52.576: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0 container env-test: 
STEP: delete the pod
Aug 11 09:39:52.594: INFO: Waiting for pod pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0 to disappear
Aug 11 09:39:52.617: INFO: Pod pod-configmaps-3f8eff59-1a79-45fd-9ae7-d1bdd7f914f0 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:39:52.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4251" for this suite.
Aug 11 09:39:58.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:39:58.749: INFO: namespace secrets-4251 deletion completed in 6.128456238s

• [SLOW TEST:12.317 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
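
Here the Secret is consumed through the environment rather than a volume: one env var sourced from one Secret key. The Secret name below comes from the log; the variable name and key are assumptions:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // secretEnvVar wires a single Secret key into the container environment,
    // which is what the env-test container above reads back.
    func secretEnvVar() corev1.EnvVar {
        return corev1.EnvVar{
            Name: "SECRET_DATA", // illustrative variable name
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "secret-test-744576cd-f449-4add-a6e5-966af54a0ec1",
                    },
                    Key: "data-1", // assumed key
                },
            },
        }
    }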
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:39:58.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 11 09:39:59.124: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4767,SelfLink:/api/v1/namespaces/watch-4767/configmaps/e2e-watch-test-resource-version,UID:5e0fa0b2-d777-4b3e-ae15-75d2715de1ad,ResourceVersion:4168373,Generation:0,CreationTimestamp:2020-08-11 09:39:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 11 09:39:59.124: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4767,SelfLink:/api/v1/namespaces/watch-4767/configmaps/e2e-watch-test-resource-version,UID:5e0fa0b2-d777-4b3e-ae15-75d2715de1ad,ResourceVersion:4168374,Generation:0,CreationTimestamp:2020-08-11 09:39:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:39:59.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4767" for this suite.
Aug 11 09:40:05.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:40:05.216: INFO: namespace watch-4767 deletion completed in 6.083909181s

• [SLOW TEST:6.467 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
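
Starting the watch at the resourceVersion returned by the first update is why only the second MODIFIED and the DELETED events are delivered above: the server replays history strictly after that version. A client-go sketch of the same call:

    package sketches

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchFromRV opens a ConfigMap watch at a specific resource version and
    // prints every event delivered after that point.
    func watchFromRV(c kubernetes.Interface, ns, rv string) error {
        w, err := c.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
            ResourceVersion: rv, // e.g. the version returned by the first update
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }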
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:40:05.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 11 09:40:10.515: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:40:11.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9105" for this suite.
Aug 11 09:40:37.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:40:37.637: INFO: namespace replicaset-9105 deletion completed in 26.092956367s

• [SLOW TEST:32.420 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
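
Adoption and release are mediated by ownerReferences: a bare pod whose labels match the ReplicaSet's selector gets a controller reference added; when the pod's label is changed, the controller removes the reference and the pod is orphaned again. The check the spec performs reduces to inspecting that reference, sketched here:

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // controllerOf returns the pod's controller ownerReference: non-nil with
    // Kind "ReplicaSet" once adopted, nil again after release.
    func controllerOf(pod *corev1.Pod) *metav1.OwnerReference {
        return metav1.GetControllerOf(pod)
    }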
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:40:37.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 11 09:40:37.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7166'
Aug 11 09:40:38.019: INFO: stderr: ""
Aug 11 09:40:38.019: INFO: stdout: "pod/pause created\n"
Aug 11 09:40:38.019: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 11 09:40:38.019: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7166" to be "running and ready"
Aug 11 09:40:38.054: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 34.562577ms
Aug 11 09:40:40.058: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03836064s
Aug 11 09:40:42.062: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.042496158s
Aug 11 09:40:42.062: INFO: Pod "pause" satisfied condition "running and ready"
Aug 11 09:40:42.062: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 11 09:40:42.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7166'
Aug 11 09:40:42.168: INFO: stderr: ""
Aug 11 09:40:42.168: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 11 09:40:42.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7166'
Aug 11 09:40:42.350: INFO: stderr: ""
Aug 11 09:40:42.350: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 11 09:40:42.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7166'
Aug 11 09:40:42.452: INFO: stderr: ""
Aug 11 09:40:42.452: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 11 09:40:42.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7166'
Aug 11 09:40:42.544: INFO: stderr: ""
Aug 11 09:40:42.544: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 11 09:40:42.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7166'
Aug 11 09:40:43.510: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 11 09:40:43.510: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 11 09:40:43.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7166'
Aug 11 09:40:43.781: INFO: stderr: "No resources found.\n"
Aug 11 09:40:43.781: INFO: stdout: ""
Aug 11 09:40:43.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7166 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 11 09:40:43.885: INFO: stderr: ""
Aug 11 09:40:43.885: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:40:43.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7166" for this suite.
Aug 11 09:40:50.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:40:50.258: INFO: namespace kubectl-7166 deletion completed in 6.3697616s

• [SLOW TEST:12.621 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
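
The trailing dash in `kubectl label pods pause testing-label-` removes the label. Through the API this is a merge patch in which a null value deletes the key; a client-go sketch of the removal step, using the pod and namespace names from the log:

    package sketches

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // removeTestingLabel deletes the testing-label key from the pause pod;
    // a JSON merge patch treats a null label value as "remove this key".
    func removeTestingLabel(c kubernetes.Interface, ns string) error {
        patch := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        _, err := c.CoreV1().Pods(ns).Patch(context.TODO(), "pause",
            types.MergePatchType, patch, metav1.PatchOptions{})
        return err
    }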
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:40:50.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-016eb098-5b8e-4fd5-b80a-8649962d30b8
STEP: Creating a pod to test consume secrets
Aug 11 09:40:50.469: INFO: Waiting up to 5m0s for pod "pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300" in namespace "secrets-8578" to be "success or failure"
Aug 11 09:40:50.486: INFO: Pod "pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300": Phase="Pending", Reason="", readiness=false. Elapsed: 16.743016ms
Aug 11 09:40:52.490: INFO: Pod "pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021040514s
Aug 11 09:40:54.493: INFO: Pod "pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024283941s
Aug 11 09:40:56.498: INFO: Pod "pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028435474s
STEP: Saw pod success
Aug 11 09:40:56.498: INFO: Pod "pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300" satisfied condition "success or failure"
Aug 11 09:40:56.500: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300 container secret-volume-test: 
STEP: delete the pod
Aug 11 09:40:56.530: INFO: Waiting for pod pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300 to disappear
Aug 11 09:40:56.533: INFO: Pod pod-secrets-5b001f31-292c-4fa1-823a-d9b998d03300 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:40:56.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8578" for this suite.
Aug 11 09:41:02.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:41:02.670: INFO: namespace secrets-8578 deletion completed in 6.13347753s

• [SLOW TEST:12.411 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
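
"Item Mode set" means the mapped file carries its own permission bits instead of inheriting the volume's DefaultMode. Sketch; the 0400 mode, key, and path are illustrative:

    package sketches

    import corev1 "k8s.io/api/core/v1"

    // modedSecretVolume maps one Secret key to a renamed file with an
    // explicit per-item mode overriding the volume-wide default.
    func modedSecretVolume(name string) corev1.VolumeSource {
        mode := int32(0400)
        return corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: name,
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",
                    Path: "new-path-data-1",
                    Mode: &mode,
                }},
            },
        }
    }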
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:41:02.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1120
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1120 to expose endpoints map[]
Aug 11 09:41:02.824: INFO: Get endpoints failed (16.999775ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 11 09:41:03.828: INFO: successfully validated that service endpoint-test2 in namespace services-1120 exposes endpoints map[] (1.020412321s elapsed)
STEP: Creating pod pod1 in namespace services-1120
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1120 to expose endpoints map[pod1:[80]]
Aug 11 09:41:08.032: INFO: successfully validated that service endpoint-test2 in namespace services-1120 exposes endpoints map[pod1:[80]] (4.106117982s elapsed)
STEP: Creating pod pod2 in namespace services-1120
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1120 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 11 09:41:11.088: INFO: successfully validated that service endpoint-test2 in namespace services-1120 exposes endpoints map[pod1:[80] pod2:[80]] (3.052496217s elapsed)
STEP: Deleting pod pod1 in namespace services-1120
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1120 to expose endpoints map[pod2:[80]]
Aug 11 09:41:12.128: INFO: successfully validated that service endpoint-test2 in namespace services-1120 exposes endpoints map[pod2:[80]] (1.036543422s elapsed)
STEP: Deleting pod pod2 in namespace services-1120
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1120 to expose endpoints map[]
Aug 11 09:41:13.200: INFO: successfully validated that service endpoint-test2 in namespace services-1120 exposes endpoints map[] (1.069477064s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:41:13.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1120" for this suite.
Aug 11 09:41:35.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:41:35.524: INFO: namespace services-1120 deletion completed in 22.230255691s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:32.854 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
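
The waits above poll the Service's Endpoints object, which tracks exactly the ready pods matching the Service selector, so it grows and shrinks as pod1 and pod2 are created and deleted (and is NotFound until the Service itself exists, as the first log line shows). A sketch of reading it back with client-go:

    package sketches

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // dumpEndpoints prints every ready address:port pair currently backing
    // the named Service, mirroring the map[pod1:[80] pod2:[80]] checks above.
    func dumpEndpoints(c kubernetes.Interface, ns, svc string) error {
        ep, err := c.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
        if err != nil {
            return err // e.g. NotFound before the Service is created
        }
        for _, ss := range ep.Subsets {
            for _, addr := range ss.Addresses {
                for _, port := range ss.Ports {
                    fmt.Printf("%s:%d\n", addr.IP, port.Port)
                }
            }
        }
        return nil
    }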
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 11 09:41:35.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 11 09:41:35.956: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 11 09:41:36.074: INFO: Number of nodes with available pods: 0
Aug 11 09:41:36.074: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 11 09:41:36.662: INFO: Number of nodes with available pods: 0
Aug 11 09:41:36.662: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:37.687: INFO: Number of nodes with available pods: 0
Aug 11 09:41:37.687: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:38.683: INFO: Number of nodes with available pods: 0
Aug 11 09:41:38.683: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:39.899: INFO: Number of nodes with available pods: 0
Aug 11 09:41:39.899: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:40.675: INFO: Number of nodes with available pods: 0
Aug 11 09:41:40.675: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:41.669: INFO: Number of nodes with available pods: 1
Aug 11 09:41:41.669: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 11 09:41:41.744: INFO: Number of nodes with available pods: 1
Aug 11 09:41:41.744: INFO: Number of running nodes: 0, number of available pods: 1
Aug 11 09:41:42.746: INFO: Number of nodes with available pods: 0
Aug 11 09:41:42.746: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 11 09:41:42.801: INFO: Number of nodes with available pods: 0
Aug 11 09:41:42.801: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:43.845: INFO: Number of nodes with available pods: 0
Aug 11 09:41:43.845: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:44.804: INFO: Number of nodes with available pods: 0
Aug 11 09:41:44.804: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:45.804: INFO: Number of nodes with available pods: 0
Aug 11 09:41:45.804: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:46.805: INFO: Number of nodes with available pods: 0
Aug 11 09:41:46.805: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:47.805: INFO: Number of nodes with available pods: 0
Aug 11 09:41:47.805: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:48.804: INFO: Number of nodes with available pods: 0
Aug 11 09:41:48.804: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:49.971: INFO: Number of nodes with available pods: 0
Aug 11 09:41:49.971: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:50.804: INFO: Number of nodes with available pods: 0
Aug 11 09:41:50.804: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:51.805: INFO: Number of nodes with available pods: 0
Aug 11 09:41:51.805: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:52.979: INFO: Number of nodes with available pods: 0
Aug 11 09:41:52.979: INFO: Node iruya-worker is running more than one daemon pod
Aug 11 09:41:53.805: INFO: Number of nodes with available pods: 1
Aug 11 09:41:53.805: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-318, will wait for the garbage collector to delete the pods
Aug 11 09:41:53.866: INFO: Deleting DaemonSet.extensions daemon-set took: 4.319542ms
Aug 11 09:41:54.166: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.167816ms
Aug 11 09:41:57.768: INFO: Number of nodes with available pods: 0
Aug 11 09:41:57.768: INFO: Number of running nodes: 0, number of available pods: 0
Aug 11 09:41:57.770: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-318/daemonsets","resourceVersion":"4168809"},"items":null}

Aug 11 09:41:57.827: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-318/pods","resourceVersion":"4168809"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 11 09:41:57.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-318" for this suite.
Aug 11 09:42:03.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 11 09:42:03.963: INFO: namespace daemonsets-318 deletion completed in 6.082103775s

• [SLOW TEST:28.437 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
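
The blue/green steps work because a DaemonSet with a nodeSelector only places pods on nodes carrying the matching label; flipping a node's label schedules or evicts the daemon pod, which is what the available-pod counts above track. A sketch of the relabel step, assuming the label key is `color` (the spec's actual key is not shown in the log):

    package sketches

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // relabelNode sets color=<color> on a node via merge patch; a DaemonSet
    // whose nodeSelector requires that label reacts by creating or deleting
    // its pod on the node.
    func relabelNode(c kubernetes.Interface, node, color string) error {
        patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"color":%q}}}`, color))
        _, err := c.CoreV1().Nodes().Patch(context.TODO(), node,
            types.MergePatchType, patch, metav1.PatchOptions{})
        return err
    }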
SSSSSSSSSSSSSS
Aug 11 09:42:03.963: INFO: Running AfterSuite actions on all nodes
Aug 11 09:42:03.963: INFO: Running AfterSuite actions on node 1
Aug 11 09:42:03.963: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 7002.955 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS