I0126 12:56:09.380112 8 e2e.go:243] Starting e2e run "858f472d-16d0-408e-84a4-6ce7a839b4ac" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580043367 - Will randomize all specs
Will run 215 of 4412 specs
Jan 26 12:56:09.731: INFO: >>> kubeConfig: /root/.kube/config
Jan 26 12:56:09.738: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 26 12:56:09.792: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 26 12:56:09.855: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 26 12:56:09.855: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 26 12:56:09.855: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 26 12:56:09.877: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 26 12:56:09.877: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 26 12:56:09.877: INFO: e2e test version: v1.15.7
Jan 26 12:56:09.879: INFO: kube-apiserver version: v1.15.1
SSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:56:09.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 26 12:56:10.048: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
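The "Projected secret" spec above exercises a single Secret projected into two separate volume mounts of one pod. The e2e framework builds that pod programmatically; a hypothetical manifest for the same shape of pod looks roughly like this (all names here are illustrative, not taken from the run):

```yaml
# Sketch of a pod consuming one secret via two projected volumes,
# mirroring what the [sig-storage] Projected secret test constructs.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # assumed pre-created Secret
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```

The test then waits for the pod to reach "success or failure" (Succeeded/Failed), as the polling lines below show.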
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-576f257b-0da0-4707-bc1f-33cd8b5b601a
STEP: Creating a pod to test consume secrets
Jan 26 12:56:10.070: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52" in namespace "projected-1384" to be "success or failure"
Jan 26 12:56:10.078: INFO: Pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52": Phase="Pending", Reason="", readiness=false. Elapsed: 7.282914ms
Jan 26 12:56:12.214: INFO: Pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143386928s
Jan 26 12:56:14.233: INFO: Pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162269138s
Jan 26 12:56:16.258: INFO: Pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187707105s
Jan 26 12:56:18.276: INFO: Pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.205305296s
STEP: Saw pod success
Jan 26 12:56:18.276: INFO: Pod "pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52" satisfied condition "success or failure"
Jan 26 12:56:18.290: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52 container secret-volume-test:
STEP: delete the pod
Jan 26 12:56:18.433: INFO: Waiting for pod pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52 to disappear
Jan 26 12:56:18.443: INFO: Pod pod-projected-secrets-3fb951f7-410a-4cb6-a9b2-06d8bfccca52 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:56:18.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1384" for this suite.
Jan 26 12:56:24.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:56:24.624: INFO: namespace projected-1384 deletion completed in 6.16876174s
• [SLOW TEST:14.745 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:56:24.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 12:56:24.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c" in namespace "downward-api-7026" to be "success or failure"
Jan 26 12:56:24.799: INFO: Pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 102.091708ms
Jan 26 12:56:26.814: INFO: Pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117352978s
Jan 26 12:56:28.885: INFO: Pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188419314s
Jan 26 12:56:30.900: INFO: Pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203368268s
Jan 26 12:56:32.913: INFO: Pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.215988647s
STEP: Saw pod success
Jan 26 12:56:32.913: INFO: Pod "downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c" satisfied condition "success or failure"
Jan 26 12:56:32.918: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c container client-container:
STEP: delete the pod
Jan 26 12:56:32.972: INFO: Waiting for pod downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c to disappear
Jan 26 12:56:33.104: INFO: Pod downwardapi-volume-856834a8-646b-4d2c-a18d-c1c864bd2f1c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:56:33.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7026" for this suite.
Jan 26 12:56:39.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:56:39.252: INFO: namespace downward-api-7026 deletion completed in 6.142134675s
• [SLOW TEST:14.628 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:56:39.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bb20efe9-bafa-421f-9224-4ec1f58824d0
STEP: Creating a pod to test consume configMaps
Jan 26 12:56:39.387: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93" in namespace "projected-5543" to be "success or failure"
Jan 26 12:56:39.395: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164733ms
Jan 26 12:56:41.407: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019822096s
Jan 26 12:56:43.422: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034511173s
Jan 26 12:56:45.433: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046081717s
Jan 26 12:56:47.450: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06322704s
Jan 26 12:56:49.460: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073191086s
STEP: Saw pod success
Jan 26 12:56:49.461: INFO: Pod "pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93" satisfied condition "success or failure"
Jan 26 12:56:49.468: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93 container projected-configmap-volume-test:
STEP: delete the pod
Jan 26 12:56:49.572: INFO: Waiting for pod pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93 to disappear
Jan 26 12:56:49.578: INFO: Pod pod-projected-configmaps-497a4117-665e-4da0-b368-ece2c4b66b93 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:56:49.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5543" for this suite.
Jan 26 12:56:55.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:56:55.810: INFO: namespace projected-5543 deletion completed in 6.226783189s
• [SLOW TEST:16.558 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:56:55.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 26 12:57:04.514: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1250 pod-service-account-8c555340-bef6-47ca-8d03-2b49531f1ae5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 26 12:57:07.386: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1250 pod-service-account-8c555340-bef6-47ca-8d03-2b49531f1ae5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 26 12:57:07.865: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1250 pod-service-account-8c555340-bef6-47ca-8d03-2b49531f1ae5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:57:08.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1250" for this suite.
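The ServiceAccounts test above reads the three files (`token`, `ca.crt`, `namespace`) that the kubelet projects into any pod whose service account token is mounted. A minimal pod to inspect the same files yourself might look like this (a sketch; the pod name is made up, and the secret paths are the standard in-pod mount points shown in the kubectl commands above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inspect-serviceaccount-token   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # cat the same three files the e2e test reads via kubectl exec
    command:
    - sh
    - -c
    - >-
      cat /var/run/secrets/kubernetes.io/serviceaccount/token
      /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      /var/run/secrets/kubernetes.io/serviceaccount/namespace
```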
Jan 26 12:57:14.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:57:14.563: INFO: namespace svcaccounts-1250 deletion completed in 6.1350195s
• [SLOW TEST:18.752 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:57:14.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3ef517e3-262e-485a-895b-f86cc202cc4d
STEP: Creating a pod to test consume configMaps
Jan 26 12:57:14.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6" in namespace "configmap-5365" to be "success or failure"
Jan 26 12:57:14.812: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.087558ms
Jan 26 12:57:16.825: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022167062s
Jan 26 12:57:18.844: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041142818s
Jan 26 12:57:20.894: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091324453s
Jan 26 12:57:22.903: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100161296s
Jan 26 12:57:24.914: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110862152s
Jan 26 12:57:26.922: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.119166154s
STEP: Saw pod success
Jan 26 12:57:26.922: INFO: Pod "pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6" satisfied condition "success or failure"
Jan 26 12:57:26.927: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6 container configmap-volume-test:
STEP: delete the pod
Jan 26 12:57:27.017: INFO: Waiting for pod pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6 to disappear
Jan 26 12:57:27.026: INFO: Pod pod-configmaps-7114078d-ea62-4a2a-9f9f-6d042cb43fa6 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:57:27.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5365" for this suite.
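The ConfigMap test above mounts one ConfigMap into two volumes of the same pod. A hypothetical manifest with the same shape (names illustrative, not from the run) is:

```yaml
# Sketch: one ConfigMap consumed through two volumes in a single pod,
# as in the [sig-storage] ConfigMap "multiple volumes" test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume   # assumed pre-created ConfigMap
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
```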
Jan 26 12:57:33.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:57:33.228: INFO: namespace configmap-5365 deletion completed in 6.196371032s
• [SLOW TEST:18.665 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:57:33.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 26 12:57:57.572: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:57.572: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:57.714213 8 log.go:172] (0xc000652370) (0xc001fe48c0) Create stream
I0126 12:57:57.714366 8 log.go:172] (0xc000652370) (0xc001fe48c0) Stream added, broadcasting: 1
I0126 12:57:57.720857 8 log.go:172] (0xc000652370) Reply frame received for 1
I0126 12:57:57.720904 8 log.go:172] (0xc000652370) (0xc001fe4960) Create stream
I0126 12:57:57.720915 8 log.go:172] (0xc000652370) (0xc001fe4960) Stream added, broadcasting: 3
I0126 12:57:57.722152 8 log.go:172] (0xc000652370) Reply frame received for 3
I0126 12:57:57.722189 8 log.go:172] (0xc000652370) (0xc000d6c0a0) Create stream
I0126 12:57:57.722199 8 log.go:172] (0xc000652370) (0xc000d6c0a0) Stream added, broadcasting: 5
I0126 12:57:57.723964 8 log.go:172] (0xc000652370) Reply frame received for 5
I0126 12:57:57.867996 8 log.go:172] (0xc000652370) Data frame received for 3
I0126 12:57:57.868170 8 log.go:172] (0xc001fe4960) (3) Data frame handling
I0126 12:57:57.868227 8 log.go:172] (0xc001fe4960) (3) Data frame sent
I0126 12:57:58.029167 8 log.go:172] (0xc000652370) (0xc000d6c0a0) Stream removed, broadcasting: 5
I0126 12:57:58.029461 8 log.go:172] (0xc000652370) (0xc001fe4960) Stream removed, broadcasting: 3
I0126 12:57:58.029580 8 log.go:172] (0xc000652370) Data frame received for 1
I0126 12:57:58.029626 8 log.go:172] (0xc001fe48c0) (1) Data frame handling
I0126 12:57:58.029732 8 log.go:172] (0xc001fe48c0) (1) Data frame sent
I0126 12:57:58.029759 8 log.go:172] (0xc000652370) (0xc001fe48c0) Stream removed, broadcasting: 1
I0126 12:57:58.029800 8 log.go:172] (0xc000652370) Go away received
I0126 12:57:58.031087 8 log.go:172] (0xc000652370) (0xc001fe48c0) Stream removed, broadcasting: 1
I0126 12:57:58.031201 8 log.go:172] (0xc000652370) (0xc001fe4960) Stream removed, broadcasting: 3
I0126 12:57:58.031217 8 log.go:172] (0xc000652370) (0xc000d6c0a0) Stream removed, broadcasting: 5
Jan 26 12:57:58.031: INFO: Exec stderr: ""
Jan 26 12:57:58.031: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:58.031: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:58.106949 8 log.go:172] (0xc0018944d0) (0xc001d3db80) Create stream
I0126 12:57:58.107071 8 log.go:172] (0xc0018944d0) (0xc001d3db80) Stream added, broadcasting: 1
I0126 12:57:58.123233 8 log.go:172] (0xc0018944d0) Reply frame received for 1
I0126 12:57:58.123424 8 log.go:172] (0xc0018944d0) (0xc001821a40) Create stream
I0126 12:57:58.123452 8 log.go:172] (0xc0018944d0) (0xc001821a40) Stream added, broadcasting: 3
I0126 12:57:58.125356 8 log.go:172] (0xc0018944d0) Reply frame received for 3
I0126 12:57:58.125408 8 log.go:172] (0xc0018944d0) (0xc001e1d220) Create stream
I0126 12:57:58.125431 8 log.go:172] (0xc0018944d0) (0xc001e1d220) Stream added, broadcasting: 5
I0126 12:57:58.127231 8 log.go:172] (0xc0018944d0) Reply frame received for 5
I0126 12:57:58.232169 8 log.go:172] (0xc0018944d0) Data frame received for 3
I0126 12:57:58.232472 8 log.go:172] (0xc001821a40) (3) Data frame handling
I0126 12:57:58.232541 8 log.go:172] (0xc001821a40) (3) Data frame sent
I0126 12:57:58.375720 8 log.go:172] (0xc0018944d0) (0xc001821a40) Stream removed, broadcasting: 3
I0126 12:57:58.375959 8 log.go:172] (0xc0018944d0) Data frame received for 1
I0126 12:57:58.376027 8 log.go:172] (0xc0018944d0) (0xc001e1d220) Stream removed, broadcasting: 5
I0126 12:57:58.376141 8 log.go:172] (0xc001d3db80) (1) Data frame handling
I0126 12:57:58.376181 8 log.go:172] (0xc001d3db80) (1) Data frame sent
I0126 12:57:58.376192 8 log.go:172] (0xc0018944d0) (0xc001d3db80) Stream removed, broadcasting: 1
I0126 12:57:58.376223 8 log.go:172] (0xc0018944d0) Go away received
I0126 12:57:58.376764 8 log.go:172] (0xc0018944d0) (0xc001d3db80) Stream removed, broadcasting: 1
I0126 12:57:58.376816 8 log.go:172] (0xc0018944d0) (0xc001821a40) Stream removed, broadcasting: 3
I0126 12:57:58.376830 8 log.go:172] (0xc0018944d0) (0xc001e1d220) Stream removed, broadcasting: 5
Jan 26 12:57:58.376: INFO: Exec stderr: ""
Jan 26 12:57:58.376: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:58.377: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:58.436880 8 log.go:172] (0xc002a708f0) (0xc000d6c5a0) Create stream
I0126 12:57:58.437019 8 log.go:172] (0xc002a708f0) (0xc000d6c5a0) Stream added, broadcasting: 1
I0126 12:57:58.445091 8 log.go:172] (0xc002a708f0) Reply frame received for 1
I0126 12:57:58.445245 8 log.go:172] (0xc002a708f0) (0xc000d6c640) Create stream
I0126 12:57:58.445261 8 log.go:172] (0xc002a708f0) (0xc000d6c640) Stream added, broadcasting: 3
I0126 12:57:58.448048 8 log.go:172] (0xc002a708f0) Reply frame received for 3
I0126 12:57:58.448096 8 log.go:172] (0xc002a708f0) (0xc001fe4a00) Create stream
I0126 12:57:58.448110 8 log.go:172] (0xc002a708f0) (0xc001fe4a00) Stream added, broadcasting: 5
I0126 12:57:58.449743 8 log.go:172] (0xc002a708f0) Reply frame received for 5
I0126 12:57:58.785166 8 log.go:172] (0xc002a708f0) Data frame received for 3
I0126 12:57:58.785341 8 log.go:172] (0xc000d6c640) (3) Data frame handling
I0126 12:57:58.785410 8 log.go:172] (0xc000d6c640) (3) Data frame sent
I0126 12:57:58.911034 8 log.go:172] (0xc002a708f0) Data frame received for 1
I0126 12:57:58.911240 8 log.go:172] (0xc002a708f0) (0xc000d6c640) Stream removed, broadcasting: 3
I0126 12:57:58.911341 8 log.go:172] (0xc000d6c5a0) (1) Data frame handling
I0126 12:57:58.911375 8 log.go:172] (0xc000d6c5a0) (1) Data frame sent
I0126 12:57:58.911408 8 log.go:172] (0xc002a708f0) (0xc001fe4a00) Stream removed, broadcasting: 5
I0126 12:57:58.911466 8 log.go:172] (0xc002a708f0) (0xc000d6c5a0) Stream removed, broadcasting: 1
I0126 12:57:58.911492 8 log.go:172] (0xc002a708f0) Go away received
I0126 12:57:58.911751 8 log.go:172] (0xc002a708f0) (0xc000d6c5a0) Stream removed, broadcasting: 1
I0126 12:57:58.911762 8 log.go:172] (0xc002a708f0) (0xc000d6c640) Stream removed, broadcasting: 3
I0126 12:57:58.911768 8 log.go:172] (0xc002a708f0) (0xc001fe4a00) Stream removed, broadcasting: 5
Jan 26 12:57:58.911: INFO: Exec stderr: ""
Jan 26 12:57:58.911: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:58.911: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:58.978983 8 log.go:172] (0xc002ee6bb0) (0xc000d6cbe0) Create stream
I0126 12:57:58.979039 8 log.go:172] (0xc002ee6bb0) (0xc000d6cbe0) Stream added, broadcasting: 1
I0126 12:57:58.984818 8 log.go:172] (0xc002ee6bb0) Reply frame received for 1
I0126 12:57:58.984865 8 log.go:172] (0xc002ee6bb0) (0xc001d3dc20) Create stream
I0126 12:57:58.984880 8 log.go:172] (0xc002ee6bb0) (0xc001d3dc20) Stream added, broadcasting: 3
I0126 12:57:58.986640 8 log.go:172] (0xc002ee6bb0) Reply frame received for 3
I0126 12:57:58.986683 8 log.go:172] (0xc002ee6bb0) (0xc001e1d2c0) Create stream
I0126 12:57:58.986710 8 log.go:172] (0xc002ee6bb0) (0xc001e1d2c0) Stream added, broadcasting: 5
I0126 12:57:58.989443 8 log.go:172] (0xc002ee6bb0) Reply frame received for 5
I0126 12:57:59.132417 8 log.go:172] (0xc002ee6bb0) Data frame received for 3
I0126 12:57:59.132528 8 log.go:172] (0xc001d3dc20) (3) Data frame handling
I0126 12:57:59.132562 8 log.go:172] (0xc001d3dc20) (3) Data frame sent
I0126 12:57:59.316340 8 log.go:172] (0xc002ee6bb0) Data frame received for 1
I0126 12:57:59.316438 8 log.go:172] (0xc002ee6bb0) (0xc001e1d2c0) Stream removed, broadcasting: 5
I0126 12:57:59.316505 8 log.go:172] (0xc000d6cbe0) (1) Data frame handling
I0126 12:57:59.316527 8 log.go:172] (0xc000d6cbe0) (1) Data frame sent
I0126 12:57:59.316574 8 log.go:172] (0xc002ee6bb0) (0xc001d3dc20) Stream removed, broadcasting: 3
I0126 12:57:59.316606 8 log.go:172] (0xc002ee6bb0) (0xc000d6cbe0) Stream removed, broadcasting: 1
I0126 12:57:59.316635 8 log.go:172] (0xc002ee6bb0) Go away received
I0126 12:57:59.316819 8 log.go:172] (0xc002ee6bb0) (0xc000d6cbe0) Stream removed, broadcasting: 1
I0126 12:57:59.316831 8 log.go:172] (0xc002ee6bb0) (0xc001d3dc20) Stream removed, broadcasting: 3
I0126 12:57:59.316841 8 log.go:172] (0xc002ee6bb0) (0xc001e1d2c0) Stream removed, broadcasting: 5
Jan 26 12:57:59.316: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 26 12:57:59.317: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:59.317: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:59.369430 8 log.go:172] (0xc002bced10) (0xc001e1d720) Create stream
I0126 12:57:59.369481 8 log.go:172] (0xc002bced10) (0xc001e1d720) Stream added, broadcasting: 1
I0126 12:57:59.372499 8 log.go:172] (0xc002bced10) Reply frame received for 1
I0126 12:57:59.372524 8 log.go:172] (0xc002bced10) (0xc001821c20) Create stream
I0126 12:57:59.372533 8 log.go:172] (0xc002bced10) (0xc001821c20) Stream added, broadcasting: 3
I0126 12:57:59.373333 8 log.go:172] (0xc002bced10) Reply frame received for 3
I0126 12:57:59.373353 8 log.go:172] (0xc002bced10) (0xc001e1d7c0) Create stream
I0126 12:57:59.373358 8 log.go:172] (0xc002bced10) (0xc001e1d7c0) Stream added, broadcasting: 5
I0126 12:57:59.374368 8 log.go:172] (0xc002bced10) Reply frame received for 5
I0126 12:57:59.470841 8 log.go:172] (0xc002bced10) Data frame received for 3
I0126 12:57:59.470984 8 log.go:172] (0xc001821c20) (3) Data frame handling
I0126 12:57:59.471039 8 log.go:172] (0xc001821c20) (3) Data frame sent
I0126 12:57:59.579955 8 log.go:172] (0xc002bced10) Data frame received for 1
I0126 12:57:59.580093 8 log.go:172] (0xc002bced10) (0xc001e1d7c0) Stream removed, broadcasting: 5
I0126 12:57:59.580196 8 log.go:172] (0xc001e1d720) (1) Data frame handling
I0126 12:57:59.580240 8 log.go:172] (0xc001e1d720) (1) Data frame sent
I0126 12:57:59.580264 8 log.go:172] (0xc002bced10) (0xc001821c20) Stream removed, broadcasting: 3
I0126 12:57:59.580322 8 log.go:172] (0xc002bced10) (0xc001e1d720) Stream removed, broadcasting: 1
I0126 12:57:59.580355 8 log.go:172] (0xc002bced10) Go away received
I0126 12:57:59.580980 8 log.go:172] (0xc002bced10) (0xc001e1d720) Stream removed, broadcasting: 1
I0126 12:57:59.581018 8 log.go:172] (0xc002bced10) (0xc001821c20) Stream removed, broadcasting: 3
I0126 12:57:59.581041 8 log.go:172] (0xc002bced10) (0xc001e1d7c0) Stream removed, broadcasting: 5
Jan 26 12:57:59.581: INFO: Exec stderr: ""
Jan 26 12:57:59.581: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:59.581: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:59.627352 8 log.go:172] (0xc002d1ca50) (0xc001fe4e60) Create stream
I0126 12:57:59.627380 8 log.go:172] (0xc002d1ca50) (0xc001fe4e60) Stream added, broadcasting: 1
I0126 12:57:59.633306 8 log.go:172] (0xc002d1ca50) Reply frame received for 1
I0126 12:57:59.633366 8 log.go:172] (0xc002d1ca50) (0xc0009861e0) Create stream
I0126 12:57:59.633374 8 log.go:172] (0xc002d1ca50) (0xc0009861e0) Stream added, broadcasting: 3
I0126 12:57:59.634451 8 log.go:172] (0xc002d1ca50) Reply frame received for 3
I0126 12:57:59.634481 8 log.go:172] (0xc002d1ca50) (0xc000d6cc80) Create stream
I0126 12:57:59.634494 8 log.go:172] (0xc002d1ca50) (0xc000d6cc80) Stream added, broadcasting: 5
I0126 12:57:59.636219 8 log.go:172] (0xc002d1ca50) Reply frame received for 5
I0126 12:57:59.752689 8 log.go:172] (0xc002d1ca50) Data frame received for 3
I0126 12:57:59.752774 8 log.go:172] (0xc0009861e0) (3) Data frame handling
I0126 12:57:59.752799 8 log.go:172] (0xc0009861e0) (3) Data frame sent
I0126 12:57:59.882813 8 log.go:172] (0xc002d1ca50) Data frame received for 1
I0126 12:57:59.882948 8 log.go:172] (0xc002d1ca50) (0xc0009861e0) Stream removed, broadcasting: 3
I0126 12:57:59.883056 8 log.go:172] (0xc001fe4e60) (1) Data frame handling
I0126 12:57:59.883116 8 log.go:172] (0xc001fe4e60) (1) Data frame sent
I0126 12:57:59.883504 8 log.go:172] (0xc002d1ca50) (0xc000d6cc80) Stream removed, broadcasting: 5
I0126 12:57:59.883663 8 log.go:172] (0xc002d1ca50) (0xc001fe4e60) Stream removed, broadcasting: 1
I0126 12:57:59.883711 8 log.go:172] (0xc002d1ca50) Go away received
I0126 12:57:59.883834 8 log.go:172] (0xc002d1ca50) (0xc001fe4e60) Stream removed, broadcasting: 1
I0126 12:57:59.883878 8 log.go:172] (0xc002d1ca50) (0xc0009861e0) Stream removed, broadcasting: 3
I0126 12:57:59.883887 8 log.go:172] (0xc002d1ca50) (0xc000d6cc80) Stream removed, broadcasting: 5
Jan 26 12:57:59.883: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 26 12:57:59.884: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:57:59.884: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:57:59.951769 8 log.go:172] (0xc002bcf810) (0xc001e1db80) Create stream
I0126 12:57:59.952001 8 log.go:172] (0xc002bcf810) (0xc001e1db80) Stream added, broadcasting: 1
I0126 12:57:59.964203 8 log.go:172] (0xc002bcf810) Reply frame received for 1
I0126 12:57:59.964278 8 log.go:172] (0xc002bcf810) (0xc001821cc0) Create stream
I0126 12:57:59.964291 8 log.go:172] (0xc002bcf810) (0xc001821cc0) Stream added, broadcasting: 3
I0126 12:57:59.966739 8 log.go:172] (0xc002bcf810) Reply frame received for 3
I0126 12:57:59.966804 8 log.go:172] (0xc002bcf810) (0xc001e1dc20) Create stream
I0126 12:57:59.966813 8 log.go:172] (0xc002bcf810) (0xc001e1dc20) Stream added, broadcasting: 5
I0126 12:57:59.968614       8 log.go:172] (0xc002bcf810) Reply frame received for 5
I0126 12:58:00.084934       8 log.go:172] (0xc002bcf810) Data frame received for 3
I0126 12:58:00.085109       8 log.go:172] (0xc001821cc0) (3) Data frame handling
I0126 12:58:00.085140       8 log.go:172] (0xc001821cc0) (3) Data frame sent
I0126 12:58:00.240228       8 log.go:172] (0xc002bcf810) (0xc001e1dc20) Stream removed, broadcasting: 5
I0126 12:58:00.240479       8 log.go:172] (0xc002bcf810) Data frame received for 1
I0126 12:58:00.240526       8 log.go:172] (0xc002bcf810) (0xc001821cc0) Stream removed, broadcasting: 3
I0126 12:58:00.240677       8 log.go:172] (0xc001e1db80) (1) Data frame handling
I0126 12:58:00.240732       8 log.go:172] (0xc001e1db80) (1) Data frame sent
I0126 12:58:00.240778       8 log.go:172] (0xc002bcf810) (0xc001e1db80) Stream removed, broadcasting: 1
I0126 12:58:00.240840       8 log.go:172] (0xc002bcf810) Go away received
I0126 12:58:00.241582       8 log.go:172] (0xc002bcf810) (0xc001e1db80) Stream removed, broadcasting: 1
I0126 12:58:00.241679       8 log.go:172] (0xc002bcf810) (0xc001821cc0) Stream removed, broadcasting: 3
I0126 12:58:00.241722       8 log.go:172] (0xc002bcf810) (0xc001e1dc20) Stream removed, broadcasting: 5
Jan 26 12:58:00.241: INFO: Exec stderr: ""
Jan 26 12:58:00.241: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:58:00.242: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:58:00.314139       8 log.go:172] (0xc001df04d0) (0xc000118820) Create stream
I0126 12:58:00.314646       8 log.go:172] (0xc001df04d0) (0xc000118820) Stream added, broadcasting: 1
I0126 12:58:00.326489       8 log.go:172] (0xc001df04d0) Reply frame received for 1
I0126 12:58:00.326565       8 log.go:172] (0xc001df04d0) (0xc001fe4f00) Create stream
I0126 12:58:00.326579       8 log.go:172] (0xc001df04d0) (0xc001fe4f00) Stream added, broadcasting: 3
I0126 12:58:00.328458       8 log.go:172] (0xc001df04d0) Reply frame received for 3
I0126 12:58:00.328526       8 log.go:172] (0xc001df04d0) (0xc000118c80) Create stream
I0126 12:58:00.328537       8 log.go:172] (0xc001df04d0) (0xc000118c80) Stream added, broadcasting: 5
I0126 12:58:00.329697       8 log.go:172] (0xc001df04d0) Reply frame received for 5
I0126 12:58:00.446237       8 log.go:172] (0xc001df04d0) Data frame received for 3
I0126 12:58:00.446371       8 log.go:172] (0xc001fe4f00) (3) Data frame handling
I0126 12:58:00.446441       8 log.go:172] (0xc001fe4f00) (3) Data frame sent
I0126 12:58:00.647378       8 log.go:172] (0xc001df04d0) (0xc000118c80) Stream removed, broadcasting: 5
I0126 12:58:00.647581       8 log.go:172] (0xc001df04d0) Data frame received for 1
I0126 12:58:00.647611       8 log.go:172] (0xc001df04d0) (0xc001fe4f00) Stream removed, broadcasting: 3
I0126 12:58:00.647790       8 log.go:172] (0xc000118820) (1) Data frame handling
I0126 12:58:00.647891       8 log.go:172] (0xc000118820) (1) Data frame sent
I0126 12:58:00.647976       8 log.go:172] (0xc001df04d0) (0xc000118820) Stream removed, broadcasting: 1
I0126 12:58:00.648023       8 log.go:172] (0xc001df04d0) Go away received
I0126 12:58:00.648335       8 log.go:172] (0xc001df04d0) (0xc000118820) Stream removed, broadcasting: 1
I0126 12:58:00.648358       8 log.go:172] (0xc001df04d0) (0xc001fe4f00) Stream removed, broadcasting: 3
I0126 12:58:00.648371       8 log.go:172] (0xc001df04d0) (0xc000118c80) Stream removed, broadcasting: 5
Jan 26 12:58:00.648: INFO: Exec stderr: ""
Jan 26 12:58:00.648: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:58:00.648: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:58:00.719136       8 log.go:172] (0xc00263e210) (0xc000d6d360) Create stream
I0126 12:58:00.719187       8 log.go:172] (0xc00263e210) (0xc000d6d360) Stream added, broadcasting: 1
I0126 12:58:00.724273       8 log.go:172] (0xc00263e210) Reply frame received for 1
I0126 12:58:00.724313       8 log.go:172] (0xc00263e210) (0xc001d3dcc0) Create stream
I0126 12:58:00.724332       8 log.go:172] (0xc00263e210) (0xc001d3dcc0) Stream added, broadcasting: 3
I0126 12:58:00.725386       8 log.go:172] (0xc00263e210) Reply frame received for 3
I0126 12:58:00.725404       8 log.go:172] (0xc00263e210) (0xc000118d20) Create stream
I0126 12:58:00.725412       8 log.go:172] (0xc00263e210) (0xc000118d20) Stream added, broadcasting: 5
I0126 12:58:00.726381       8 log.go:172] (0xc00263e210) Reply frame received for 5
I0126 12:58:00.801128       8 log.go:172] (0xc00263e210) Data frame received for 3
I0126 12:58:00.801182       8 log.go:172] (0xc001d3dcc0) (3) Data frame handling
I0126 12:58:00.801205       8 log.go:172] (0xc001d3dcc0) (3) Data frame sent
I0126 12:58:00.947231       8 log.go:172] (0xc00263e210) Data frame received for 1
I0126 12:58:00.947736       8 log.go:172] (0xc00263e210) (0xc000118d20) Stream removed, broadcasting: 5
I0126 12:58:00.948037       8 log.go:172] (0xc000d6d360) (1) Data frame handling
I0126 12:58:00.948118       8 log.go:172] (0xc00263e210) (0xc001d3dcc0) Stream removed, broadcasting: 3
I0126 12:58:00.948192       8 log.go:172] (0xc000d6d360) (1) Data frame sent
I0126 12:58:00.948238       8 log.go:172] (0xc00263e210) (0xc000d6d360) Stream removed, broadcasting: 1
I0126 12:58:00.948302       8 log.go:172] (0xc00263e210) Go away received
I0126 12:58:00.948633       8 log.go:172] (0xc00263e210) (0xc000d6d360) Stream removed, broadcasting: 1
I0126 12:58:00.948653       8 log.go:172] (0xc00263e210) (0xc001d3dcc0) Stream removed, broadcasting: 3
I0126 12:58:00.948672       8 log.go:172] (0xc00263e210) (0xc000118d20) Stream removed, broadcasting: 5
Jan 26 12:58:00.948: INFO: Exec stderr: ""
Jan 26 12:58:00.948: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7043 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 12:58:00.948: INFO: >>> kubeConfig: /root/.kube/config
I0126 12:58:01.005393       8 log.go:172] (0xc001df13f0) (0xc000119900) Create stream
I0126 12:58:01.005463       8 log.go:172] (0xc001df13f0) (0xc000119900) Stream added, broadcasting: 1
I0126 12:58:01.012232       8 log.go:172] (0xc001df13f0) Reply frame received for 1
I0126 12:58:01.012276       8 log.go:172] (0xc001df13f0) (0xc001fe5040) Create stream
I0126 12:58:01.012284       8 log.go:172] (0xc001df13f0) (0xc001fe5040) Stream added, broadcasting: 3
I0126 12:58:01.013804       8 log.go:172] (0xc001df13f0) Reply frame received for 3
I0126 12:58:01.013822       8 log.go:172] (0xc001df13f0) (0xc001fe50e0) Create stream
I0126 12:58:01.013832       8 log.go:172] (0xc001df13f0) (0xc001fe50e0) Stream added, broadcasting: 5
I0126 12:58:01.017470       8 log.go:172] (0xc001df13f0) Reply frame received for 5
I0126 12:58:01.170392       8 log.go:172] (0xc001df13f0) Data frame received for 3
I0126 12:58:01.170507       8 log.go:172] (0xc001fe5040) (3) Data frame handling
I0126 12:58:01.170539       8 log.go:172] (0xc001fe5040) (3) Data frame sent
I0126 12:58:01.274015       8 log.go:172] (0xc001df13f0) Data frame received for 1
I0126 12:58:01.274119       8 log.go:172] (0xc001df13f0) (0xc001fe50e0) Stream removed, broadcasting: 5
I0126 12:58:01.274184       8 log.go:172] (0xc000119900) (1) Data frame handling
I0126 12:58:01.274204       8 log.go:172] (0xc000119900) (1) Data frame sent
I0126 12:58:01.274232       8 log.go:172] (0xc001df13f0) (0xc001fe5040) Stream removed, broadcasting: 3
I0126 12:58:01.274281       8 log.go:172] (0xc001df13f0) (0xc000119900) Stream removed, broadcasting: 1
I0126 12:58:01.274331       8 log.go:172] (0xc001df13f0) Go away received
I0126 12:58:01.274529       8 log.go:172] (0xc001df13f0) (0xc000119900) Stream removed, broadcasting: 1
I0126 12:58:01.274560       8 log.go:172] (0xc001df13f0) (0xc001fe5040) Stream removed, broadcasting: 3
I0126 12:58:01.274568       8 log.go:172] (0xc001df13f0) (0xc001fe50e0) Stream removed, broadcasting: 5
Jan 26 12:58:01.274: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:58:01.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7043" for this suite.
Jan 26 12:58:49.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:58:49.417: INFO: namespace e2e-kubelet-etc-hosts-7043 deletion completed in 48.136140404s

• [SLOW TEST:76.189 seconds]
[k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:58:49.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 26 12:58:59.795: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:58:59.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9059" for this suite.
Jan 26 12:59:05.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:59:06.016: INFO: namespace container-runtime-9059 deletion completed in 6.170387515s

• [SLOW TEST:16.598 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
  on terminated container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:59:06.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5631/secret-test-5e641841-1a85-4228-b4e0-92d9afcaa6cf
STEP: Creating a pod to test consume secrets
Jan 26 12:59:06.169: INFO: Waiting up to 5m0s for pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40" in namespace "secrets-5631" to be "success or failure"
Jan 26 12:59:06.179: INFO: Pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 10.113478ms
Jan 26 12:59:08.815: INFO: Pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645386979s
Jan 26 12:59:10.823: INFO: Pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653454198s
Jan 26 12:59:12.849: INFO: Pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.680028903s
Jan 26 12:59:14.872: INFO: Pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.702231888s
STEP: Saw pod success
Jan 26 12:59:14.872: INFO: Pod "pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40" satisfied condition "success or failure"
Jan 26 12:59:14.877: INFO: Trying to get logs from node iruya-node pod pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40 container env-test: 
STEP: delete the pod
Jan 26 12:59:15.048: INFO: Waiting for pod pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40 to disappear
Jan 26 12:59:15.053: INFO: Pod pod-configmaps-14cec9ce-0106-427f-a82b-7c1271db2a40 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:59:15.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5631" for this suite.
Jan 26 12:59:21.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:59:21.273: INFO: namespace secrets-5631 deletion completed in 6.211942758s

• [SLOW TEST:15.257 seconds]
[sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:59:21.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 26 12:59:21.379: INFO: Waiting up to 5m0s for pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f" in namespace "containers-7087" to be "success or failure"
Jan 26 12:59:21.388: INFO: Pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.679547ms
Jan 26 12:59:23.400: INFO: Pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019460225s
Jan 26 12:59:25.411: INFO: Pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030507698s
Jan 26 12:59:27.423: INFO: Pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042123592s
Jan 26 12:59:29.432: INFO: Pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051728457s
STEP: Saw pod success
Jan 26 12:59:29.433: INFO: Pod "client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f" satisfied condition "success or failure"
Jan 26 12:59:29.439: INFO: Trying to get logs from node iruya-node pod client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f container test-container: 
STEP: delete the pod
Jan 26 12:59:29.502: INFO: Waiting for pod client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f to disappear
Jan 26 12:59:29.527: INFO: Pod client-containers-c514b87a-abd6-4d3b-80fb-f4d175a3af2f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:59:29.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7087" for this suite.
Jan 26 12:59:35.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:59:35.692: INFO: namespace containers-7087 deletion completed in 6.156632324s

• [SLOW TEST:14.417 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:59:35.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d634e789-b952-4827-802a-6c112618c42f
STEP: Creating a pod to test consume secrets
Jan 26 12:59:35.805: INFO: Waiting up to 5m0s for pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd" in namespace "secrets-8719" to be "success or failure"
Jan 26 12:59:35.814: INFO: Pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15482ms
Jan 26 12:59:37.843: INFO: Pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037172225s
Jan 26 12:59:39.855: INFO: Pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049253682s
Jan 26 12:59:41.867: INFO: Pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061221935s
Jan 26 12:59:43.884: INFO: Pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078550898s
STEP: Saw pod success
Jan 26 12:59:43.885: INFO: Pod "pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd" satisfied condition "success or failure"
Jan 26 12:59:43.892: INFO: Trying to get logs from node iruya-node pod pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd container secret-volume-test: 
STEP: delete the pod
Jan 26 12:59:43.987: INFO: Waiting for pod pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd to disappear
Jan 26 12:59:44.022: INFO: Pod pod-secrets-10df98e0-1146-4d87-bbce-afbff795f0bd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 12:59:44.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8719" for this suite.
Jan 26 12:59:50.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 12:59:50.302: INFO: namespace secrets-8719 deletion completed in 6.267445553s

• [SLOW TEST:14.610 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 12:59:50.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-3d0a755e-0c55-4d6a-8d91-6214409f7712 in namespace container-probe-562
Jan 26 12:59:58.480: INFO: Started pod test-webserver-3d0a755e-0c55-4d6a-8d91-6214409f7712 in namespace container-probe-562
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 12:59:58.484: INFO: Initial restart count of pod test-webserver-3d0a755e-0c55-4d6a-8d91-6214409f7712 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:04:00.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-562" for this suite.
Jan 26 13:04:06.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:04:06.621: INFO: namespace container-probe-562 deletion completed in 6.189342762s

• [SLOW TEST:256.319 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:04:06.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0126 13:04:17.158524       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 13:04:17.158: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:04:17.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7439" for this suite.
Jan 26 13:04:23.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:04:23.628: INFO: namespace gc-7439 deletion completed in 6.463359625s

• [SLOW TEST:17.006 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:04:23.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 26 13:04:23.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6818'
Jan 26 13:04:24.157: INFO: stderr: ""
Jan 26 13:04:24.157: INFO: stdout: "pod/pause created\n"
Jan 26 13:04:24.157: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 26 13:04:24.157: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6818" to be "running and ready"
Jan 26 13:04:24.168: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382654ms
Jan 26 13:04:26.179: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021867383s
Jan 26 13:04:28.188: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030557582s
Jan 26 13:04:30.197: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039796386s
Jan 26 13:04:32.205: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047348064s
Jan 26 13:04:34.215: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.057737673s
Jan 26 13:04:34.215: INFO: Pod "pause" satisfied condition "running and ready"
Jan 26 13:04:34.215: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 26 13:04:34.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6818'
Jan 26 13:04:34.364: INFO: stderr: ""
Jan 26 13:04:34.364: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 26 13:04:34.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6818'
Jan 26 13:04:34.494: INFO: stderr: ""
Jan 26 13:04:34.494: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 26 13:04:34.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6818'
Jan 26 13:04:34.652: INFO: stderr: ""
Jan 26 13:04:34.652: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 26 13:04:34.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6818'
Jan 26 13:04:34.783: INFO: stderr: ""
Jan 26 13:04:34.783: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 26 13:04:34.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6818'
Jan 26 13:04:34.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 13:04:34.910: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 26 13:04:34.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6818'
Jan 26 13:04:35.229: INFO: stderr: "No resources found.\n"
Jan 26 13:04:35.230: INFO: stdout: ""
Jan 26 13:04:35.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6818 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 13:04:35.504: INFO: stderr: ""
Jan 26 13:04:35.504: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:04:35.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6818" for this suite.
Jan 26 13:04:41.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:04:41.645: INFO: namespace kubectl-6818 deletion completed in 6.125913566s • [SLOW TEST:18.015 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:04:41.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 26 13:04:41.797: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669" in namespace "projected-1855" to be "success or failure" Jan 26 13:04:41.832: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.783355ms Jan 26 13:04:43.837: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040278389s Jan 26 13:04:45.846: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048647371s Jan 26 13:04:47.863: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06649449s Jan 26 13:04:49.876: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079387506s Jan 26 13:04:51.887: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090270477s Jan 26 13:04:53.898: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.101033403s STEP: Saw pod success Jan 26 13:04:53.898: INFO: Pod "downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669" satisfied condition "success or failure" Jan 26 13:04:53.906: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669 container client-container: STEP: delete the pod Jan 26 13:04:54.047: INFO: Waiting for pod downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669 to disappear Jan 26 13:04:54.174: INFO: Pod downwardapi-volume-ceca5301-4a6c-49aa-857d-b20632463669 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:04:54.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1855" for this suite. 
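The repeated "Phase=Pending ... Elapsed: ..." lines above come from the framework polling the pod's phase roughly every two seconds until it reaches a terminal phase or the 5m0s budget runs out. A hedged sketch of that polling loop, with an injected `get_phase` callback standing in for the real API call (the actual framework is Go, not Python):

```python
import itertools
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll get_phase() until it reports 'Succeeded' or 'Failed', or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated status stream like the log: several Pending polls, then Succeeded.
phases = itertools.chain(["Pending"] * 3, itertools.repeat("Succeeded"))
print(wait_for_terminal_phase(lambda: next(phases), interval_s=0))  # Succeeded
```

"Succeeded" satisfies the test's "success or failure" condition; a "Failed" phase would also end the wait, and the test would then inspect the container logs.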
Jan 26 13:05:00.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:05:00.359: INFO: namespace projected-1855 deletion completed in 6.178730281s • [SLOW TEST:18.713 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:05:00.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jan 26 13:05:00.563: INFO: Waiting up to 5m0s for pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4" in namespace "var-expansion-1966" to be "success or failure" Jan 26 13:05:00.600: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.983374ms Jan 26 13:05:02.615: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.052126762s Jan 26 13:05:04.633: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069919957s Jan 26 13:05:06.647: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083833944s Jan 26 13:05:08.655: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09250536s Jan 26 13:05:10.671: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107873085s STEP: Saw pod success Jan 26 13:05:10.671: INFO: Pod "var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4" satisfied condition "success or failure" Jan 26 13:05:10.675: INFO: Trying to get logs from node iruya-node pod var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4 container dapi-container: STEP: delete the pod Jan 26 13:05:10.849: INFO: Waiting for pod var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4 to disappear Jan 26 13:05:10.875: INFO: Pod var-expansion-5729a871-bc26-4888-ba09-d7b0bfb976a4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:05:10.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1966" for this suite. 
Jan 26 13:05:16.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:05:17.057: INFO: namespace var-expansion-1966 deletion completed in 6.166447844s • [SLOW TEST:16.697 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:05:17.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 26 13:05:17.257: INFO: Waiting up to 5m0s for pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91" in namespace "emptydir-7285" to be "success or failure" Jan 26 13:05:17.271: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91": Phase="Pending", Reason="", readiness=false. Elapsed: 14.233591ms Jan 26 13:05:19.298: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040701269s Jan 26 13:05:21.305: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.047763218s Jan 26 13:05:23.314: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057076356s Jan 26 13:05:25.371: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91": Phase="Running", Reason="", readiness=true. Elapsed: 8.114241077s Jan 26 13:05:27.381: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12358627s STEP: Saw pod success Jan 26 13:05:27.381: INFO: Pod "pod-639b9ad1-200b-469b-844f-f823cf0b6e91" satisfied condition "success or failure" Jan 26 13:05:27.385: INFO: Trying to get logs from node iruya-node pod pod-639b9ad1-200b-469b-844f-f823cf0b6e91 container test-container: STEP: delete the pod Jan 26 13:05:27.741: INFO: Waiting for pod pod-639b9ad1-200b-469b-844f-f823cf0b6e91 to disappear Jan 26 13:05:27.750: INFO: Pod pod-639b9ad1-200b-469b-844f-f823cf0b6e91 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:05:27.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7285" for this suite. 
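The (non-root,0644,tmpfs) case mounts an emptyDir with file mode 0644 and verifies the symbolic permission string from inside the container. The mapping between the octal mode in the test name and the `ls -l`-style string the container reports can be sketched with the standard library:

```python
import stat

# stat.filemode renders a numeric mode as the symbolic form a test would grep for.
# S_IFREG marks the entry as a regular file (the leading '-').
print(stat.filemode(stat.S_IFREG | 0o644))  # -rw-r--r--
print(stat.filemode(stat.S_IFREG | 0o777))  # -rwxrwxrwx
```

So the 0644 variant expects `-rw-r--r--` on the mounted file, while the 0777 variants later in this run expect `-rwxrwxrwx`.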
Jan 26 13:05:33.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:05:33.965: INFO: namespace emptydir-7285 deletion completed in 6.20685573s • [SLOW TEST:16.908 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:05:33.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-2521 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2521 to expose endpoints map[] Jan 26 13:05:34.208: INFO: Get endpoints failed (10.208136ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 26 13:05:35.219: INFO: successfully validated that service multi-endpoint-test in namespace services-2521 exposes endpoints map[] (1.020932622s elapsed) STEP: Creating pod pod1 in namespace services-2521 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-2521 to expose endpoints map[pod1:[100]] Jan 26 13:05:39.337: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.100449987s elapsed, will retry) Jan 26 13:05:43.410: INFO: successfully validated that service multi-endpoint-test in namespace services-2521 exposes endpoints map[pod1:[100]] (8.173233495s elapsed) STEP: Creating pod pod2 in namespace services-2521 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2521 to expose endpoints map[pod1:[100] pod2:[101]] Jan 26 13:05:48.594: INFO: Unexpected endpoints: found map[216b8480-cedb-4786-94a0-a5d1a5cb832b:[100]], expected map[pod1:[100] pod2:[101]] (5.162342203s elapsed, will retry) Jan 26 13:05:50.650: INFO: successfully validated that service multi-endpoint-test in namespace services-2521 exposes endpoints map[pod1:[100] pod2:[101]] (7.218535391s elapsed) STEP: Deleting pod pod1 in namespace services-2521 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2521 to expose endpoints map[pod2:[101]] Jan 26 13:05:51.708: INFO: successfully validated that service multi-endpoint-test in namespace services-2521 exposes endpoints map[pod2:[101]] (1.047812194s elapsed) STEP: Deleting pod pod2 in namespace services-2521 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2521 to expose endpoints map[] Jan 26 13:05:53.237: INFO: successfully validated that service multi-endpoint-test in namespace services-2521 exposes endpoints map[] (1.522787488s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:05:53.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2521" for this suite. 
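The multiport-endpoints test above repeatedly compares the observed endpoint map against the expected one (e.g. `map[pod1:[100] pod2:[101]]`) and retries on mismatch, which is why the log shows "Unexpected endpoints ... will retry" before "successfully validated". A sketch of that comparison, assuming endpoint maps of pod name to port list (the real framework also resolves pod UIDs to names, which this skips):

```python
def endpoints_match(observed, expected):
    """Compare endpoint maps; port lists are compared order-insensitively."""
    if observed.keys() != expected.keys():
        return False
    return all(sorted(observed[k]) == sorted(expected[k]) for k in expected)

expected = {"pod1": [100], "pod2": [101]}
observed = {"pod1": [100]}
print(endpoints_match(observed, expected))  # False: pod2 not yet ready, retry
observed["pod2"] = [101]
print(endpoints_match(observed, expected))  # True: validation succeeds
```

Deleting pod1 then shrinks the expected map to `{"pod2": [101]}`, matching the later validation steps in the log.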
Jan 26 13:06:15.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:06:15.843: INFO: namespace services-2521 deletion completed in 22.26864891s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:41.877 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:06:15.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-99529450-4d15-431d-b735-3033d1b63f64 in namespace container-probe-4751 Jan 26 13:06:24.163: INFO: Started pod busybox-99529450-4d15-431d-b735-3033d1b63f64 in namespace container-probe-4751 STEP: checking the pod's current state and verifying that restartCount is present Jan 26 13:06:24.166: INFO: Initial restart count of pod 
busybox-99529450-4d15-431d-b735-3033d1b63f64 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:10:25.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4751" for this suite. Jan 26 13:10:31.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:10:32.106: INFO: namespace container-probe-4751 deletion completed in 6.160964269s • [SLOW TEST:256.263 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:10:32.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jan 26 13:10:32.305: INFO: Waiting up to 5m0s for pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826" in namespace "var-expansion-5799" to be 
"success or failure" Jan 26 13:10:32.356: INFO: Pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826": Phase="Pending", Reason="", readiness=false. Elapsed: 50.602483ms Jan 26 13:10:34.366: INFO: Pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060661878s Jan 26 13:10:36.376: INFO: Pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070577983s Jan 26 13:10:38.387: INFO: Pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081524074s Jan 26 13:10:40.400: INFO: Pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094940052s STEP: Saw pod success Jan 26 13:10:40.401: INFO: Pod "var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826" satisfied condition "success or failure" Jan 26 13:10:40.407: INFO: Trying to get logs from node iruya-node pod var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826 container dapi-container: STEP: delete the pod Jan 26 13:10:40.473: INFO: Waiting for pod var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826 to disappear Jan 26 13:10:40.482: INFO: Pod var-expansion-630d0a1a-34e3-47d7-924d-fc0d64880826 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:10:40.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5799" for this suite. 
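The Variable Expansion tests exercise Kubernetes' `$(VAR)` substitution in a container's command and args, where `$$` escapes a literal `$` and unresolvable references pass through unchanged. A simplified sketch of that rule (no nested references, assumes every `$(` has a closing `)`):

```python
def expand(s, env):
    """Simplified $(VAR) expansion: $$ yields a literal $, unknown vars pass through."""
    out, i = [], 0
    while i < len(s):
        if s.startswith("$$", i):
            out.append("$")
            i += 2
        elif s.startswith("$(", i):
            end = s.find(")", i)
            name = s[i + 2:end]
            out.append(env.get(name, s[i:end + 1]))  # unknown -> keep "$(NAME)" verbatim
            i = end + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(expand("test-value is $(MY_VAR)", {"MY_VAR": "hello"}))  # test-value is hello
```

The `$$` escape is the same convention visible in the DNS probe scripts later in this run, which use `$$(dig ...)` so the shell substitution survives Kubernetes' own expansion pass.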
Jan 26 13:10:46.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:10:46.647: INFO: namespace var-expansion-5799 deletion completed in 6.157573892s • [SLOW TEST:14.540 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:10:46.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 26 13:10:46.746: INFO: Waiting up to 5m0s for pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133" in namespace "emptydir-6215" to be "success or failure" Jan 26 13:10:46.752: INFO: Pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133": Phase="Pending", Reason="", readiness=false. Elapsed: 5.274726ms Jan 26 13:10:48.760: INFO: Pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013646292s Jan 26 13:10:50.775: INFO: Pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028225546s Jan 26 13:10:52.786: INFO: Pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039598268s Jan 26 13:10:54.804: INFO: Pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057893126s STEP: Saw pod success Jan 26 13:10:54.805: INFO: Pod "pod-26aa9ffd-4090-4af2-b592-a80329a93133" satisfied condition "success or failure" Jan 26 13:10:54.812: INFO: Trying to get logs from node iruya-node pod pod-26aa9ffd-4090-4af2-b592-a80329a93133 container test-container: STEP: delete the pod Jan 26 13:10:54.922: INFO: Waiting for pod pod-26aa9ffd-4090-4af2-b592-a80329a93133 to disappear Jan 26 13:10:54.929: INFO: Pod pod-26aa9ffd-4090-4af2-b592-a80329a93133 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:10:54.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6215" for this suite. 
Jan 26 13:11:00.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:11:01.095: INFO: namespace emptydir-6215 deletion completed in 6.159040354s • [SLOW TEST:14.447 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:11:01.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-416.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-416.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 13:11:15.335: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.343: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.352: INFO: Unable to read wheezy_udp@PodARecord from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods 
dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.359: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.365: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.375: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.387: INFO: Unable to read jessie_udp@PodARecord from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.397: INFO: Unable to read jessie_tcp@PodARecord from pod dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72: the server could not find the requested resource (get pods dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72) Jan 26 13:11:15.397: INFO: Lookups using dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 26 13:11:20.475: INFO: DNS probes using dns-416/dns-test-59f8a2a0-8c67-4556-97a5-826fb5686a72 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:11:20.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
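The probe scripts above build each pod's A record with `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-416.pod.cluster.local"}'`: dots in the pod IP become dashes, followed by the namespace and the `pod.cluster.local` suffix. An equivalent sketch (IP and namespace values are illustrative):

```python
def pod_a_record(ip, namespace, domain="cluster.local"):
    """Equivalent of the awk pipeline: 10.44.0.5 -> 10-44-0-5.<ns>.pod.<domain>."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{domain}"

print(pod_a_record("10.44.0.5", "dns-416"))  # 10-44-0-5.dns-416.pod.cluster.local
```

The initial "Unable to read ... from pod" lines are expected while the probe containers are still writing their result files; once both the wheezy and jessie probes report OK for every name, the lookups are declared successful, as the final "DNS probes ... succeeded" line shows.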
STEP: Destroying namespace "dns-416" for this suite. Jan 26 13:11:26.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:11:26.806: INFO: namespace dns-416 deletion completed in 6.189912637s • [SLOW TEST:25.711 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:11:26.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0a87cbfe-957a-43b4-83c6-c0e7ca3c1e6a STEP: Creating a pod to test consume secrets Jan 26 13:11:26.994: INFO: Waiting up to 5m0s for pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0" in namespace "secrets-808" to be "success or failure" Jan 26 13:11:27.004: INFO: Pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.421562ms Jan 26 13:11:29.011: INFO: Pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016569941s Jan 26 13:11:31.020: INFO: Pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026031127s Jan 26 13:11:33.028: INFO: Pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034220597s Jan 26 13:11:35.045: INFO: Pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050649597s STEP: Saw pod success Jan 26 13:11:35.045: INFO: Pod "pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0" satisfied condition "success or failure" Jan 26 13:11:35.049: INFO: Trying to get logs from node iruya-node pod pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0 container secret-volume-test: STEP: delete the pod Jan 26 13:11:35.269: INFO: Waiting for pod pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0 to disappear Jan 26 13:11:35.280: INFO: Pod pod-secrets-b323bdf8-f63b-4bf1-8585-be1bad4a4af0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:11:35.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-808" for this suite. Jan 26 13:11:41.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:11:41.454: INFO: namespace secrets-808 deletion completed in 6.167228413s STEP: Destroying namespace "secret-namespace-1470" for this suite. 
Jan 26 13:11:47.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:11:47.619: INFO: namespace secret-namespace-1470 deletion completed in 6.164863645s • [SLOW TEST:20.811 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:11:47.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 26 13:11:56.877: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:11:56.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7822" for this suite. Jan 26 13:12:03.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:12:03.117: INFO: namespace container-runtime-7822 deletion completed in 6.128385811s • [SLOW TEST:15.497 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:12:03.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:12:03.419: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod3", UID:"ead1507e-4f3c-4b52-8aa7-50fbe6ecd75b", Controller:(*bool)(0xc002c9eb8a), BlockOwnerDeletion:(*bool)(0xc002c9eb8b)}} Jan 26 13:12:03.437: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1027be6a-53c1-48b9-8310-43d01155b995", Controller:(*bool)(0xc002c9ed4a), BlockOwnerDeletion:(*bool)(0xc002c9ed4b)}} Jan 26 13:12:03.520: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ab583269-9573-48a9-8633-c0e8c4a4e01c", Controller:(*bool)(0xc002c9ef12), BlockOwnerDeletion:(*bool)(0xc002c9ef13)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:12:08.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4141" for this suite. Jan 26 13:12:14.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:12:14.911: INFO: namespace gc-4141 deletion completed in 6.276373582s • [SLOW TEST:11.793 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:12:14.913: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 26 13:12:15.021: INFO: Waiting up to 5m0s for pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa" in namespace "downward-api-9561" to be "success or failure" Jan 26 13:12:15.081: INFO: Pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 60.231428ms Jan 26 13:12:17.092: INFO: Pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07127686s Jan 26 13:12:19.105: INFO: Pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08381287s Jan 26 13:12:21.119: INFO: Pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097566592s Jan 26 13:12:23.126: INFO: Pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.104963345s STEP: Saw pod success Jan 26 13:12:23.126: INFO: Pod "downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa" satisfied condition "success or failure" Jan 26 13:12:23.131: INFO: Trying to get logs from node iruya-node pod downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa container dapi-container: STEP: delete the pod Jan 26 13:12:23.235: INFO: Waiting for pod downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa to disappear Jan 26 13:12:23.241: INFO: Pod downward-api-9a2dfc94-36a1-4185-92c1-cfb89c934ffa no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:12:23.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9561" for this suite. Jan 26 13:12:29.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:12:29.486: INFO: namespace downward-api-9561 deletion completed in 6.239702873s • [SLOW TEST:14.574 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:12:29.488: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5388 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 26 13:12:29.630: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 26 13:13:07.832: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5388 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 13:13:07.833: INFO: >>> kubeConfig: /root/.kube/config I0126 13:13:07.900511 8 log.go:172] (0xc0023773f0) (0xc0026f4500) Create stream I0126 13:13:07.900616 8 log.go:172] (0xc0023773f0) (0xc0026f4500) Stream added, broadcasting: 1 I0126 13:13:07.905938 8 log.go:172] (0xc0023773f0) Reply frame received for 1 I0126 13:13:07.905971 8 log.go:172] (0xc0023773f0) (0xc0029d1400) Create stream I0126 13:13:07.905980 8 log.go:172] (0xc0023773f0) (0xc0029d1400) Stream added, broadcasting: 3 I0126 13:13:07.907616 8 log.go:172] (0xc0023773f0) Reply frame received for 3 I0126 13:13:07.907636 8 log.go:172] (0xc0023773f0) (0xc0029d14a0) Create stream I0126 13:13:07.907643 8 log.go:172] (0xc0023773f0) (0xc0029d14a0) Stream added, broadcasting: 5 I0126 13:13:07.909012 8 log.go:172] (0xc0023773f0) Reply frame received for 5 I0126 13:13:08.164461 8 log.go:172] (0xc0023773f0) Data frame received for 3 I0126 13:13:08.164662 8 log.go:172] (0xc0029d1400) (3) Data frame handling I0126 13:13:08.164721 8 log.go:172] (0xc0029d1400) (3) Data frame sent I0126 
13:13:08.303044 8 log.go:172] (0xc0023773f0) Data frame received for 1 I0126 13:13:08.303146 8 log.go:172] (0xc0023773f0) (0xc0029d1400) Stream removed, broadcasting: 3 I0126 13:13:08.303255 8 log.go:172] (0xc0026f4500) (1) Data frame handling I0126 13:13:08.303308 8 log.go:172] (0xc0026f4500) (1) Data frame sent I0126 13:13:08.303341 8 log.go:172] (0xc0023773f0) (0xc0029d14a0) Stream removed, broadcasting: 5 I0126 13:13:08.303373 8 log.go:172] (0xc0023773f0) (0xc0026f4500) Stream removed, broadcasting: 1 I0126 13:13:08.303419 8 log.go:172] (0xc0023773f0) Go away received I0126 13:13:08.303706 8 log.go:172] (0xc0023773f0) (0xc0026f4500) Stream removed, broadcasting: 1 I0126 13:13:08.303746 8 log.go:172] (0xc0023773f0) (0xc0029d1400) Stream removed, broadcasting: 3 I0126 13:13:08.303760 8 log.go:172] (0xc0023773f0) (0xc0029d14a0) Stream removed, broadcasting: 5 Jan 26 13:13:08.303: INFO: Waiting for endpoints: map[] Jan 26 13:13:08.881: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5388 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 13:13:08.881: INFO: >>> kubeConfig: /root/.kube/config I0126 13:13:08.979194 8 log.go:172] (0xc000c23340) (0xc0028a46e0) Create stream I0126 13:13:08.979356 8 log.go:172] (0xc000c23340) (0xc0028a46e0) Stream added, broadcasting: 1 I0126 13:13:08.992748 8 log.go:172] (0xc000c23340) Reply frame received for 1 I0126 13:13:08.992821 8 log.go:172] (0xc000c23340) (0xc0026ab5e0) Create stream I0126 13:13:08.992838 8 log.go:172] (0xc000c23340) (0xc0026ab5e0) Stream added, broadcasting: 3 I0126 13:13:08.996317 8 log.go:172] (0xc000c23340) Reply frame received for 3 I0126 13:13:08.996440 8 log.go:172] (0xc000c23340) (0xc00221bf40) Create stream I0126 13:13:08.996457 8 log.go:172] (0xc000c23340) (0xc00221bf40) Stream added, 
broadcasting: 5 I0126 13:13:09.002398 8 log.go:172] (0xc000c23340) Reply frame received for 5 I0126 13:13:09.145475 8 log.go:172] (0xc000c23340) Data frame received for 3 I0126 13:13:09.145516 8 log.go:172] (0xc0026ab5e0) (3) Data frame handling I0126 13:13:09.145534 8 log.go:172] (0xc0026ab5e0) (3) Data frame sent I0126 13:13:09.269567 8 log.go:172] (0xc000c23340) Data frame received for 1 I0126 13:13:09.269748 8 log.go:172] (0xc000c23340) (0xc0026ab5e0) Stream removed, broadcasting: 3 I0126 13:13:09.269856 8 log.go:172] (0xc0028a46e0) (1) Data frame handling I0126 13:13:09.269903 8 log.go:172] (0xc000c23340) (0xc00221bf40) Stream removed, broadcasting: 5 I0126 13:13:09.269948 8 log.go:172] (0xc0028a46e0) (1) Data frame sent I0126 13:13:09.269996 8 log.go:172] (0xc000c23340) (0xc0028a46e0) Stream removed, broadcasting: 1 I0126 13:13:09.270028 8 log.go:172] (0xc000c23340) Go away received I0126 13:13:09.270433 8 log.go:172] (0xc000c23340) (0xc0028a46e0) Stream removed, broadcasting: 1 I0126 13:13:09.270466 8 log.go:172] (0xc000c23340) (0xc0026ab5e0) Stream removed, broadcasting: 3 I0126 13:13:09.270482 8 log.go:172] (0xc000c23340) (0xc00221bf40) Stream removed, broadcasting: 5 Jan 26 13:13:09.270: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:13:09.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5388" for this suite. 
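The `ExecWithOptions` entries above show how the intra-pod UDP check works: the suite execs `curl` inside a host test container against the test container's `/dial` endpoint, which relays the request to each target pod and reports which endpoints answered. A sketch of building and inspecting that probe URL (the helper name is hypothetical; the URL shape is taken directly from the log):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def dial_url(test_container_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL seen in the ExecWithOptions log entries.

    The test container at test_container_ip relays `tries` requests to
    target_ip:port over `protocol` and returns the hostnames that replied.
    """
    query = urlencode({"request": "hostName", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{test_container_ip}:8080/dial?{query}"

url = dial_url("10.44.0.2", "10.32.0.4")
params = parse_qs(urlsplit(url).query)
```

`Waiting for endpoints: map[]` in the log is the success condition: the set of endpoints that have not yet answered has drained to empty.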
Jan 26 13:13:21.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:13:21.456: INFO: namespace pod-network-test-5388 deletion completed in 12.166864731s • [SLOW TEST:51.968 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:13:21.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0126 13:13:51.897939 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 26 13:13:51.898: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:13:51.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-179" for this suite. 
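The garbage-collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet survives. A toy model of the propagation policies over an owner-reference graph (a pure-Python sketch under simplified assumptions; the real controller also handles foreground ordering and finalizers, which this ignores):

```python
def delete_object(objects, owner_refs, name, policy="Background"):
    """Delete `name` from a toy object graph under a propagation policy.

    objects:    set of live object names
    owner_refs: dict mapping dependent -> set of owner names
    - "Orphan": delete only the owner; dependents merely lose the owner ref.
    - "Background"/"Foreground": dependents whose last owner was deleted are
      garbage-collected too (foreground ordering is not modeled here).
    """
    objects.discard(name)
    for dep, owners in list(owner_refs.items()):
        if name in owners:
            owners.discard(name)
            if policy != "Orphan" and not owners:
                objects.discard(dep)  # collect the now-ownerless dependent
    return objects

# A Deployment owning a ReplicaSet: orphan-deletion keeps the RS around,
# which is exactly what the test verifies.
objs = {"deployment", "replicaset"}
refs = {"replicaset": {"deployment"}}
delete_object(objs, refs, "deployment", policy="Orphan")
```

The same graph model explains the earlier dependency-circle test: because collection only triggers when an object's owner set drains, a pod1→pod2→pod3→pod1 cycle never blocks deletion progress.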
Jan 26 13:13:57.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:13:58.021: INFO: namespace gc-179 deletion completed in 6.117432889s • [SLOW TEST:36.562 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:13:58.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-wlrn STEP: Creating a pod to test atomic-volume-subpath Jan 26 13:13:59.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wlrn" in namespace "subpath-9017" to be "success or failure" Jan 26 13:13:59.733: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.926161ms Jan 26 13:14:01.749: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056011271s Jan 26 13:14:03.758: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06531923s Jan 26 13:14:05.792: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099299018s Jan 26 13:14:07.811: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 8.117663049s Jan 26 13:14:09.861: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 10.168166429s Jan 26 13:14:11.876: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 12.183026538s Jan 26 13:14:13.887: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 14.193701645s Jan 26 13:14:15.898: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 16.204916883s Jan 26 13:14:17.922: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 18.22902262s Jan 26 13:14:19.930: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 20.236808204s Jan 26 13:14:21.940: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 22.247145656s Jan 26 13:14:23.953: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 24.260140812s Jan 26 13:14:25.964: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 26.270566705s Jan 26 13:14:27.976: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Running", Reason="", readiness=true. Elapsed: 28.282555401s Jan 26 13:14:29.983: INFO: Pod "pod-subpath-test-configmap-wlrn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.290330418s STEP: Saw pod success Jan 26 13:14:29.983: INFO: Pod "pod-subpath-test-configmap-wlrn" satisfied condition "success or failure" Jan 26 13:14:29.987: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-wlrn container test-container-subpath-configmap-wlrn: STEP: delete the pod Jan 26 13:14:30.109: INFO: Waiting for pod pod-subpath-test-configmap-wlrn to disappear Jan 26 13:14:30.121: INFO: Pod pod-subpath-test-configmap-wlrn no longer exists STEP: Deleting pod pod-subpath-test-configmap-wlrn Jan 26 13:14:30.121: INFO: Deleting pod "pod-subpath-test-configmap-wlrn" in namespace "subpath-9017" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:14:30.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9017" for this suite. Jan 26 13:14:36.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:14:36.356: INFO: namespace subpath-9017 deletion completed in 6.223500679s • [SLOW TEST:38.335 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Jan 26 13:14:36.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e5cb06e5-986e-4e7a-93de-2800fe5e601d STEP: Creating a pod to test consume secrets Jan 26 13:14:36.484: INFO: Waiting up to 5m0s for pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67" in namespace "secrets-5035" to be "success or failure" Jan 26 13:14:36.489: INFO: Pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67": Phase="Pending", Reason="", readiness=false. Elapsed: 5.148194ms Jan 26 13:14:38.507: INFO: Pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023134172s Jan 26 13:14:40.521: INFO: Pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037233451s Jan 26 13:14:42.538: INFO: Pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05466838s Jan 26 13:14:44.548: INFO: Pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.06422829s STEP: Saw pod success Jan 26 13:14:44.548: INFO: Pod "pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67" satisfied condition "success or failure" Jan 26 13:14:44.551: INFO: Trying to get logs from node iruya-node pod pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67 container secret-volume-test: STEP: delete the pod Jan 26 13:14:44.648: INFO: Waiting for pod pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67 to disappear Jan 26 13:14:44.653: INFO: Pod pod-secrets-13b20fdf-87bd-4dd2-8c71-21864f54ce67 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:14:44.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5035" for this suite. Jan 26 13:14:50.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:14:50.827: INFO: namespace secrets-5035 deletion completed in 6.166198036s • [SLOW TEST:14.471 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:14:50.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 26 13:14:50.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42" in namespace "projected-8629" to be "success or failure" Jan 26 13:14:50.985: INFO: Pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42": Phase="Pending", Reason="", readiness=false. Elapsed: 30.048447ms Jan 26 13:14:52.995: INFO: Pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03989733s Jan 26 13:14:55.011: INFO: Pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055592227s Jan 26 13:14:57.028: INFO: Pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073109003s Jan 26 13:14:59.039: INFO: Pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.083493495s STEP: Saw pod success Jan 26 13:14:59.039: INFO: Pod "downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42" satisfied condition "success or failure" Jan 26 13:14:59.044: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42 container client-container: STEP: delete the pod Jan 26 13:14:59.352: INFO: Waiting for pod downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42 to disappear Jan 26 13:14:59.368: INFO: Pod downwardapi-volume-bed9cfa4-ce1d-4440-974a-b4cbd05b3f42 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:14:59.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8629" for this suite. Jan 26 13:15:05.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:15:05.489: INFO: namespace projected-8629 deletion completed in 6.113725047s • [SLOW TEST:14.661 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:15:05.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:15:05.545: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 26 13:15:08.460: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:15:09.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8472" for this suite. 
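The ReplicationController test above creates a quota allowing two pods, asks the RC for more, checks that a failure condition surfaces, then scales within quota and checks the condition clears. A toy reconcile sketch of that behavior (function and field names are illustrative; only `ReplicaFailure` is the real condition type from the log's scenario):

```python
def reconcile_rc(desired_replicas, pod_quota):
    """Toy RC reconcile: create pods up to quota and surface a failure condition.

    Returns (created, conditions). An RC asking for more pods than the
    namespace quota allows gets a ReplicaFailure condition; once desired
    replicas fit within quota, the condition list is empty again.
    """
    created = min(desired_replicas, pod_quota)
    conditions = []
    if desired_replicas > pod_quota:
        conditions.append({"type": "ReplicaFailure", "status": "True",
                           "reason": "FailedCreate"})
    return created, conditions

# Over quota: two pods run, the condition is set.
created, conds = reconcile_rc(desired_replicas=3, pod_quota=2)
```

Scaling the RC down, as the test does with `Updating replication controller "condition-test"`, corresponds to calling the reconcile again with a `desired_replicas` that fits the quota.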
Jan 26 13:15:15.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:15:17.754: INFO: namespace replication-controller-8472 deletion completed in 8.269781634s • [SLOW TEST:12.264 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:15:17.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-89572f84-139e-49a2-ac33-c62c948b4212 STEP: Creating a pod to test consume configMaps Jan 26 13:15:17.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86" in namespace "configmap-1793" to be "success or failure" Jan 26 13:15:18.102: INFO: Pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86": Phase="Pending", Reason="", readiness=false. 
Elapsed: 110.626469ms Jan 26 13:15:20.109: INFO: Pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117758033s Jan 26 13:15:22.118: INFO: Pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126384084s Jan 26 13:15:24.129: INFO: Pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137927213s Jan 26 13:15:26.140: INFO: Pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.148825672s STEP: Saw pod success Jan 26 13:15:26.140: INFO: Pod "pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86" satisfied condition "success or failure" Jan 26 13:15:26.145: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86 container configmap-volume-test: STEP: delete the pod Jan 26 13:15:26.368: INFO: Waiting for pod pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86 to disappear Jan 26 13:15:26.375: INFO: Pod pod-configmaps-5072ce6e-1ad0-4926-a645-b83131c1ec86 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:15:26.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1793" for this suite. 
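The ConfigMap volume test above uses "mappings and Item mode set": instead of projecting every key as a file named after the key, an explicit `items` list maps chosen keys to paths with per-item file modes. A sketch of that projection (the key/path/mode values below are hypothetical examples, not the ones the test generated):

```python
def project_configmap(data, items=None, default_mode=0o644):
    """Project ConfigMap data into {path: (content, mode)} as a volume mount would.

    With no items list, every key becomes a file named after the key with the
    default mode; with items, only listed keys are projected, each to its
    `path`, using its `mode` when one is given.
    """
    if items is None:
        return {key: (value, default_mode) for key, value in data.items()}
    projected = {}
    for item in items:
        projected[item["path"]] = (data[item["key"]],
                                   item.get("mode", default_mode))
    return projected

files = project_configmap(
    {"data-2": "value-2"},
    items=[{"key": "data-2", "path": "path/to/data-2", "mode": 0o400}],
)
```

The test then reads the projected file from inside the pod and checks both its content and its mode bits.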
Jan 26 13:15:32.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:15:32.584: INFO: namespace configmap-1793 deletion completed in 6.20151177s • [SLOW TEST:14.829 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:15:32.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1263/configmap-test-252448a0-feb3-4de6-b319-a009f497d810 STEP: Creating a pod to test consume configMaps Jan 26 13:15:32.759: INFO: Waiting up to 5m0s for pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646" in namespace "configmap-1263" to be "success or failure" Jan 26 13:15:32.777: INFO: Pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646": Phase="Pending", Reason="", readiness=false. Elapsed: 18.424293ms Jan 26 13:15:34.784: INFO: Pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025240097s Jan 26 13:15:36.789: INFO: Pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030702633s Jan 26 13:15:38.799: INFO: Pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040469872s Jan 26 13:15:40.816: INFO: Pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05725762s STEP: Saw pod success Jan 26 13:15:40.816: INFO: Pod "pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646" satisfied condition "success or failure" Jan 26 13:15:40.823: INFO: Trying to get logs from node iruya-node pod pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646 container env-test: STEP: delete the pod Jan 26 13:15:40.926: INFO: Waiting for pod pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646 to disappear Jan 26 13:15:40.934: INFO: Pod pod-configmaps-17a93ab5-aeaf-4244-988f-e36206683646 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:15:40.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1263" for this suite. 
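(Editor's note: the test above consumes a ConfigMap key as a container environment variable via `configMapKeyRef`. A minimal sketch, with illustrative names in place of the generated ones:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]   # prints the injected variable among the environment
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: example-configmap  # illustrative ConfigMap name
          key: data-1
```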
Jan 26 13:15:46.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:15:47.122: INFO: namespace configmap-1263 deletion completed in 6.180500583s • [SLOW TEST:14.538 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:15:47.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 26 13:15:47.218: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934700,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 13:15:47.218: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934700,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 26 13:15:57.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934715,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 26 13:15:57.235: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934715,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 26 13:16:07.249: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934729,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 26 13:16:07.250: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934729,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 26 13:16:17.272: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934743,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 26 13:16:17.273: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-a,UID:494ff6a2-27f0-40ac-9503-0d2928f85e07,ResourceVersion:21934743,Generation:0,CreationTimestamp:2020-01-26 13:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 26 13:16:27.288: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-b,UID:3614b8ac-735b-4e00-a93b-17375e9c0556,ResourceVersion:21934758,Generation:0,CreationTimestamp:2020-01-26 13:16:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 13:16:27.288: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-b,UID:3614b8ac-735b-4e00-a93b-17375e9c0556,ResourceVersion:21934758,Generation:0,CreationTimestamp:2020-01-26 13:16:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 26 13:16:37.302: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-b,UID:3614b8ac-735b-4e00-a93b-17375e9c0556,ResourceVersion:21934772,Generation:0,CreationTimestamp:2020-01-26 13:16:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 26 13:16:37.303: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6106,SelfLink:/api/v1/namespaces/watch-6106/configmaps/e2e-watch-test-configmap-b,UID:3614b8ac-735b-4e00-a93b-17375e9c0556,ResourceVersion:21934772,Generation:0,CreationTimestamp:2020-01-26 13:16:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:16:47.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6106" for this suite. 
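(Editor's note: the watch test above opens label-selected watches and asserts that ADDED, MODIFIED, and DELETED events for configmaps A and B reach exactly the watchers whose selector matches. The selection hinges on a label like the following sketch; an equivalent manual check would be `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch` in the test namespace.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    # watch A selects this value; watch "A or B" uses a set-based selector
    # matching both multiple-watchers-A and multiple-watchers-B
    watch-this-configmap: multiple-watchers-A
```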
Jan 26 13:16:53.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:16:53.502: INFO: namespace watch-6106 deletion completed in 6.18648653s • [SLOW TEST:66.379 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:16:53.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jan 26 13:16:53.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4507 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 26 13:17:05.086: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0126 13:17:03.681909 259 log.go:172] (0xc0005ea580) (0xc000750fa0) Create stream\nI0126 13:17:03.682069 259 log.go:172] (0xc0005ea580) (0xc000750fa0) Stream added, broadcasting: 1\nI0126 13:17:03.691596 259 log.go:172] (0xc0005ea580) Reply frame received for 1\nI0126 13:17:03.691668 259 log.go:172] (0xc0005ea580) (0xc0005ae000) Create stream\nI0126 13:17:03.691684 259 log.go:172] (0xc0005ea580) (0xc0005ae000) Stream added, broadcasting: 3\nI0126 13:17:03.693606 259 log.go:172] (0xc0005ea580) Reply frame received for 3\nI0126 13:17:03.693658 259 log.go:172] (0xc0005ea580) (0xc0005d4000) Create stream\nI0126 13:17:03.693674 259 log.go:172] (0xc0005ea580) (0xc0005d4000) Stream added, broadcasting: 5\nI0126 13:17:03.696153 259 log.go:172] (0xc0005ea580) Reply frame received for 5\nI0126 13:17:03.696189 259 log.go:172] (0xc0005ea580) (0xc000751040) Create stream\nI0126 13:17:03.696202 259 log.go:172] (0xc0005ea580) (0xc000751040) Stream added, broadcasting: 7\nI0126 13:17:03.698467 259 log.go:172] (0xc0005ea580) Reply frame received for 7\nI0126 13:17:03.698908 259 log.go:172] (0xc0005ae000) (3) Writing data frame\nI0126 13:17:03.699154 259 log.go:172] (0xc0005ae000) (3) Writing data frame\nI0126 13:17:03.715431 259 log.go:172] (0xc0005ea580) Data frame received for 5\nI0126 13:17:03.715487 259 log.go:172] (0xc0005d4000) (5) Data frame handling\nI0126 13:17:03.715534 259 log.go:172] (0xc0005d4000) (5) Data frame sent\nI0126 13:17:03.720230 259 log.go:172] (0xc0005ea580) Data frame received for 5\nI0126 13:17:03.720259 259 log.go:172] (0xc0005d4000) (5) Data frame handling\nI0126 13:17:03.720285 259 log.go:172] (0xc0005d4000) (5) Data frame sent\nI0126 13:17:05.041198 259 log.go:172] (0xc0005ea580) Data frame received for 1\nI0126 13:17:05.041270 259 log.go:172] (0xc0005ea580) (0xc0005d4000) Stream removed, broadcasting: 5\nI0126 13:17:05.041307 259 
log.go:172] (0xc000750fa0) (1) Data frame handling\nI0126 13:17:05.041324 259 log.go:172] (0xc000750fa0) (1) Data frame sent\nI0126 13:17:05.041410 259 log.go:172] (0xc0005ea580) (0xc000751040) Stream removed, broadcasting: 7\nI0126 13:17:05.041447 259 log.go:172] (0xc0005ea580) (0xc000750fa0) Stream removed, broadcasting: 1\nI0126 13:17:05.041467 259 log.go:172] (0xc0005ea580) (0xc0005ae000) Stream removed, broadcasting: 3\nI0126 13:17:05.041532 259 log.go:172] (0xc0005ea580) Go away received\nI0126 13:17:05.041582 259 log.go:172] (0xc0005ea580) (0xc000750fa0) Stream removed, broadcasting: 1\nI0126 13:17:05.041592 259 log.go:172] (0xc0005ea580) (0xc0005ae000) Stream removed, broadcasting: 3\nI0126 13:17:05.041597 259 log.go:172] (0xc0005ea580) (0xc0005d4000) Stream removed, broadcasting: 5\nI0126 13:17:05.041602 259 log.go:172] (0xc0005ea580) (0xc000751040) Stream removed, broadcasting: 7\n" Jan 26 13:17:05.086: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:17:07.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4507" for this suite. 
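(Editor's note: as the stderr above warns, `kubectl run --generator=job/v1` is deprecated. The Job the command creates is roughly equivalent to this manifest sketch, which could instead be created with `kubectl create job`; field values mirror the logged flags, the rest is illustrative.)

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure           # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                      # from --stdin; kubectl attaches and pipes input
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```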
Jan 26 13:17:13.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:17:13.377: INFO: namespace kubectl-4507 deletion completed in 6.270097564s • [SLOW TEST:19.875 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:17:13.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 26 13:17:13.492: INFO: Waiting up to 5m0s for pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb" in namespace "emptydir-7797" to be "success or failure" Jan 26 13:17:13.518: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.107077ms Jan 26 13:17:15.526: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034443239s Jan 26 13:17:17.536: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044183127s Jan 26 13:17:19.548: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0561116s Jan 26 13:17:21.555: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06310933s Jan 26 13:17:23.566: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073653156s STEP: Saw pod success Jan 26 13:17:23.566: INFO: Pod "pod-09d21874-0b30-45c2-addc-3a098b833bfb" satisfied condition "success or failure" Jan 26 13:17:23.571: INFO: Trying to get logs from node iruya-node pod pod-09d21874-0b30-45c2-addc-3a098b833bfb container test-container: STEP: delete the pod Jan 26 13:17:23.686: INFO: Waiting for pod pod-09d21874-0b30-45c2-addc-3a098b833bfb to disappear Jan 26 13:17:23.696: INFO: Pod pod-09d21874-0b30-45c2-addc-3a098b833bfb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:17:23.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7797" for this suite. 
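(Editor's note: the EmptyDir test above verifies file ownership and mode (root, 0666) on a volume with the default medium. A minimal sketch of the volume shape it exercises, with illustrative names:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /test-volume"]   # the real test mounts and inspects modes
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # "default" medium: node-local storage; medium: Memory would use tmpfs
```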
Jan 26 13:17:29.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:17:29.930: INFO: namespace emptydir-7797 deletion completed in 6.225834693s • [SLOW TEST:16.552 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:17:29.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-f080d7e2-24fa-4cb0-a48a-d1d87328ba34 STEP: Creating a pod to test consume configMaps Jan 26 13:17:30.046: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402" in namespace "projected-3532" to be "success or failure" Jan 26 13:17:30.053: INFO: Pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.895388ms Jan 26 13:17:32.069: INFO: Pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022311617s Jan 26 13:17:34.078: INFO: Pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032095412s Jan 26 13:17:36.091: INFO: Pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0449004s Jan 26 13:17:38.100: INFO: Pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053553532s STEP: Saw pod success Jan 26 13:17:38.100: INFO: Pod "pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402" satisfied condition "success or failure" Jan 26 13:17:38.104: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402 container projected-configmap-volume-test: STEP: delete the pod Jan 26 13:17:38.457: INFO: Waiting for pod pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402 to disappear Jan 26 13:17:38.470: INFO: Pod pod-projected-configmaps-3406f68b-c98d-49e0-babe-58288f2c2402 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:17:38.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3532" for this suite. 
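(Editor's note: the projected-ConfigMap test above reads the volume from a container running as a non-root user. A minimal sketch of a projected volume plus non-root security context; the UID and all names are illustrative.)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # illustrative non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: example-configmap   # illustrative; secrets/downwardAPI can be projected too
```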
Jan 26 13:17:44.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:17:44.645: INFO: namespace projected-3532 deletion completed in 6.166973245s • [SLOW TEST:14.715 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:17:44.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 26 13:17:44.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5463' Jan 26 13:17:45.220: INFO: stderr: "" Jan 26 13:17:45.220: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Jan 26 13:17:46.231: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:46.231: INFO: Found 0 / 1 Jan 26 13:17:47.233: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:47.234: INFO: Found 0 / 1 Jan 26 13:17:48.230: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:48.230: INFO: Found 0 / 1 Jan 26 13:17:49.232: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:49.232: INFO: Found 0 / 1 Jan 26 13:17:50.232: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:50.233: INFO: Found 0 / 1 Jan 26 13:17:51.229: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:51.229: INFO: Found 0 / 1 Jan 26 13:17:52.228: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:52.228: INFO: Found 1 / 1 Jan 26 13:17:52.228: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 26 13:17:52.233: INFO: Selector matched 1 pods for map[app:redis] Jan 26 13:17:52.234: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 26 13:17:52.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qs5bs redis-master --namespace=kubectl-5463' Jan 26 13:17:52.426: INFO: stderr: "" Jan 26 13:17:52.427: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Jan 13:17:51.094 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Jan 13:17:51.094 # Server started, Redis version 3.2.12\n1:M 26 Jan 13:17:51.094 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Jan 13:17:51.094 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 26 13:17:52.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qs5bs redis-master --namespace=kubectl-5463 --tail=1'
Jan 26 13:17:52.613: INFO: stderr: ""
Jan 26 13:17:52.613: INFO: stdout: "1:M 26 Jan 13:17:51.094 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 26 13:17:52.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qs5bs redis-master --namespace=kubectl-5463 --limit-bytes=1'
Jan 26 13:17:52.770: INFO: stderr: ""
Jan 26 13:17:52.770: INFO: stdout: " "
STEP: exposing timestamps
Jan 26 13:17:52.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qs5bs redis-master --namespace=kubectl-5463 --tail=1 --timestamps'
Jan 26 13:17:52.971: INFO: stderr: ""
Jan 26 13:17:52.971: INFO: stdout: "2020-01-26T13:17:51.095874535Z 1:M 26 Jan 13:17:51.094 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 26 13:17:55.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qs5bs redis-master --namespace=kubectl-5463 --since=1s'
Jan 26 13:17:55.701: INFO: stderr: ""
Jan 26 13:17:55.702: INFO: stdout: ""
Jan 26 13:17:55.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qs5bs redis-master --namespace=kubectl-5463 --since=24h'
Jan 26 13:17:55.938: INFO: stderr: ""
Jan 26 13:17:55.938: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 26 Jan 13:17:51.094 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Jan 13:17:51.094 # Server started, Redis version 3.2.12\n1:M 26 Jan 13:17:51.094 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Jan 13:17:51.094 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 26 13:17:55.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5463'
Jan 26 13:17:56.072: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 13:17:56.072: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 26 13:17:56.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-5463'
Jan 26 13:17:56.193: INFO: stderr: "No resources found.\n"
Jan 26 13:17:56.193: INFO: stdout: ""
Jan 26 13:17:56.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-5463 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 13:17:56.334: INFO: stderr: ""
Jan 26 13:17:56.334: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:17:56.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5463" for this suite.
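The filtering behaviour exercised above can be sketched in plain Python. The helpers below are illustrative stand-ins for the semantics of `kubectl logs --tail=N` (keep the last N lines) and `--limit-bytes=N` (keep the first N bytes); they are not part of kubectl or the e2e framework.

```python
def tail_lines(log: str, n: int) -> str:
    """Like `kubectl logs --tail=n`: return only the last n lines."""
    return "".join(log.splitlines(keepends=True)[-n:])

def limit_bytes(log: str, n: int) -> str:
    """Like `kubectl logs --limit-bytes=n`: return only the first n bytes."""
    return log.encode()[:n].decode(errors="ignore")

redis_log = (
    "1:M 26 Jan 13:17:51.094 # Server started, Redis version 3.2.12\n"
    "1:M 26 Jan 13:17:51.094 * The server is now ready to accept connections on port 6379\n"
)
print(tail_lines(redis_log, 1))   # last line only, as in the --tail=1 step
print(limit_bytes(redis_log, 1))  # a single byte, as in the --limit-bytes=1 step
```

This also explains the `--limit-bytes=1` step in the log: the Redis banner begins with a space, so a one-byte limit yields stdout `" "`.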
Jan 26 13:18:18.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:18:18.465: INFO: namespace kubectl-5463 deletion completed in 22.122563508s
• [SLOW TEST:33.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:18:18.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 13:18:18.571: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d" in namespace "downward-api-7667" to be "success or failure"
Jan 26 13:18:18.619: INFO: Pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d": Phase="Pending", Reason="", readiness=false. Elapsed: 48.638899ms
Jan 26 13:18:20.638: INFO: Pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067546618s
Jan 26 13:18:22.646: INFO: Pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075006928s
Jan 26 13:18:24.657: INFO: Pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086300162s
Jan 26 13:18:26.664: INFO: Pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093019144s
STEP: Saw pod success
Jan 26 13:18:26.664: INFO: Pod "downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d" satisfied condition "success or failure"
Jan 26 13:18:26.667: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d container client-container:
STEP: delete the pod
Jan 26 13:18:26.719: INFO: Waiting for pod downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d to disappear
Jan 26 13:18:26.842: INFO: Pod downwardapi-volume-38da7528-2228-4678-ac1f-0bcf3bab993d no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:18:26.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7667" for this suite.
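The test above mounts a downward API volume that exposes the container's CPU request as a file. The value written for a `resourceFieldRef` is the resource quantity divided by the field's divisor, rounded up (the divisor defaults to "1"). A minimal sketch of that arithmetic, using a hypothetical helper name:

```python
import math

def render_cpu_request(request_millicores: int, divisor_millicores: int = 1000) -> str:
    """Hypothetical helper: render a CPU request for a downward API volume
    file as ceil(request / divisor). With the default divisor of "1"
    (1000 millicores), a 250m request is reported as "1"."""
    return str(math.ceil(request_millicores / divisor_millicores))

print(render_cpu_request(250))     # divisor "1":  250m -> "1"
print(render_cpu_request(250, 1))  # divisor "1m": 250m -> "250"
```

The e2e pod's test container simply cats this file and the framework compares the output against the expected rounded value.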
Jan 26 13:18:32.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:18:33.034: INFO: namespace downward-api-7667 deletion completed in 6.179744937s
• [SLOW TEST:14.567 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:18:33.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
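The invariant this namespaces test checks is that deleting a namespace garbage-collects every pod inside it, and a namespace recreated under the same name starts empty. A toy in-memory model of that invariant (illustrative only; not the Kubernetes API or client-go):

```python
class Cluster:
    """Toy model: namespaces own pods; deleting a namespace removes them all."""

    def __init__(self) -> None:
        self.namespaces: set = set()
        self.pods: dict = {}  # (namespace, pod name) -> phase

    def create_namespace(self, ns: str) -> None:
        self.namespaces.add(ns)

    def create_pod(self, ns: str, name: str) -> None:
        self.pods[(ns, name)] = "Running"

    def delete_namespace(self, ns: str) -> None:
        # Cascading delete: every pod in the namespace goes with it.
        self.namespaces.discard(ns)
        self.pods = {k: v for k, v in self.pods.items() if k[0] != ns}

c = Cluster()
c.create_namespace("nsdeletetest")
c.create_pod("nsdeletetest", "test-pod")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")  # recreate under the same name
print(len(c.pods))  # 0: no pods survive the deletion
```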
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:19:03.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2543" for this suite.
Jan 26 13:19:09.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:19:09.840: INFO: namespace namespaces-2543 deletion completed in 6.180197737s
STEP: Destroying namespace "nsdeletetest-1289" for this suite.
Jan 26 13:19:09.843: INFO: Namespace nsdeletetest-1289 was already deleted
STEP: Destroying namespace "nsdeletetest-5205" for this suite.
Jan 26 13:19:15.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:19:15.997: INFO: namespace nsdeletetest-5205 deletion completed in 6.154258313s
• [SLOW TEST:42.961 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:19:15.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps]
Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 26 13:19:16.095: INFO: Number of nodes with available pods: 0
Jan 26 13:19:16.095: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:18.558: INFO: Number of nodes with available pods: 0
Jan 26 13:19:18.558: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:19.180: INFO: Number of nodes with available pods: 0
Jan 26 13:19:19.180: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:20.107: INFO: Number of nodes with available pods: 0
Jan 26 13:19:20.107: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:21.111: INFO: Number of nodes with available pods: 0
Jan 26 13:19:21.111: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:22.106: INFO: Number of nodes with available pods: 0
Jan 26 13:19:22.106: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:23.996: INFO: Number of nodes with available pods: 0
Jan 26 13:19:23.996: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:24.117: INFO: Number of nodes with available pods: 0
Jan 26 13:19:24.117: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:25.295: INFO: Number of nodes with available pods: 0
Jan 26 13:19:25.295: INFO: Node iruya-node is running more than one daemon pod
Jan 26 13:19:26.138: INFO: Number of nodes with available pods: 1
Jan 26 13:19:26.138: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 26 13:19:27.112: INFO: Number of nodes with available pods: 2
Jan 26 13:19:27.112: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 26 13:19:27.148: INFO: Number of nodes with available pods: 2
Jan 26 13:19:27.148: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6757, will wait for the garbage collector to delete the pods
Jan 26 13:19:28.269: INFO: Deleting DaemonSet.extensions daemon-set took: 9.468797ms
Jan 26 13:19:28.569: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.574919ms
Jan 26 13:19:34.721: INFO: Number of nodes with available pods: 0
Jan 26 13:19:34.722: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 13:19:34.735: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6757/daemonsets","resourceVersion":"21935238"},"items":null}
Jan 26 13:19:34.741: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6757/pods","resourceVersion":"21935238"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:19:34.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6757" for this suite.
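The retry behaviour this DaemonSet test asserts is that a daemon pod forced into the Failed phase is deleted and replaced, so every node again runs exactly one daemon pod. A toy reconcile pass illustrating that idea (a sketch only, not the real DaemonSet controller):

```python
def reconcile(nodes: list, pods: dict) -> dict:
    """One reconcile pass: drop Failed daemon pods, then ensure each node
    has a daemon pod, creating replacements in the Pending phase."""
    pods = {node: phase for node, phase in pods.items() if phase != "Failed"}
    for node in nodes:
        pods.setdefault(node, "Pending")  # recreate the missing daemon pod
    return pods

nodes = ["iruya-node", "iruya-server-sfge57q7djm7"]
pods = {"iruya-node": "Failed", "iruya-server-sfge57q7djm7": "Running"}
pods = reconcile(nodes, pods)
print(pods["iruya-node"])  # "Pending": the failed daemon pod was revived
```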
Jan 26 13:19:40.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:19:40.925: INFO: namespace daemonsets-6757 deletion completed in 6.156930355s
• [SLOW TEST:24.927 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:19:40.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 13:19:41.052: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 26 13:19:46.069: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 13:19:50.085: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 26 13:19:52.138: INFO: Creating deployment "test-rollover-deployment"
Jan 26 13:19:52.154: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 26 13:19:54.166: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 26 13:19:54.175: INFO: Ensure that
both replica sets have 1 created replica Jan 26 13:19:54.181: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 26 13:19:54.195: INFO: Updating deployment test-rollover-deployment Jan 26 13:19:54.195: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 26 13:19:56.223: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 26 13:19:56.232: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 26 13:19:56.242: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:19:56.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:19:58.256: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:19:58.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:00.256: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:20:00.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:02.253: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:20:02.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:04.256: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:20:04.257: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641603, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:06.257: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:20:06.257: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641603, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:08.255: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:20:08.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641603, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:10.260: INFO: all 
replica sets need to contain the pod-template-hash label Jan 26 13:20:10.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641603, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:12.255: INFO: all replica sets need to contain the pod-template-hash label Jan 26 13:20:12.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641603, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641592, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:20:14.258: INFO: Jan 26 13:20:14.258: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 26 13:20:14.266: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6030,SelfLink:/apis/apps/v1/namespaces/deployment-6030/deployments/test-rollover-deployment,UID:6f7fd603-ac1b-44d3-8fd1-8e4d67dc914d,ResourceVersion:21935382,Generation:2,CreationTimestamp:2020-01-26 13:19:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-26 13:19:52 +0000 UTC 2020-01-26 13:19:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-26 13:20:14 +0000 UTC 2020-01-26 13:19:52 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 26 13:20:14.271: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6030,SelfLink:/apis/apps/v1/namespaces/deployment-6030/replicasets/test-rollover-deployment-854595fc44,UID:c181d607-3467-4e88-8624-64585f1fd832,ResourceVersion:21935372,Generation:2,CreationTimestamp:2020-01-26 13:19:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6f7fd603-ac1b-44d3-8fd1-8e4d67dc914d 0xc002020e97 0xc002020e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 26 13:20:14.271: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 26 13:20:14.271: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6030,SelfLink:/apis/apps/v1/namespaces/deployment-6030/replicasets/test-rollover-controller,UID:836ed131-bfcd-4eb3-83fe-c020088baa6d,ResourceVersion:21935381,Generation:2,CreationTimestamp:2020-01-26 13:19:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6f7fd603-ac1b-44d3-8fd1-8e4d67dc914d 0xc002020daf 0xc002020dc0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 26 13:20:14.272: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6030,SelfLink:/apis/apps/v1/namespaces/deployment-6030/replicasets/test-rollover-deployment-9b8b997cf,UID:6d2d7ec2-bdeb-4596-b405-4c48360eb4b1,ResourceVersion:21935336,Generation:2,CreationTimestamp:2020-01-26 13:19:52 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6f7fd603-ac1b-44d3-8fd1-8e4d67dc914d 0xc002020f60 0xc002020f61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 26 13:20:14.277: INFO: Pod "test-rollover-deployment-854595fc44-r7fbq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-r7fbq,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6030,SelfLink:/api/v1/namespaces/deployment-6030/pods/test-rollover-deployment-854595fc44-r7fbq,UID:2b3b9446-492b-4b52-8216-c65ea4746427,ResourceVersion:21935356,Generation:0,CreationTimestamp:2020-01-26 13:19:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 c181d607-3467-4e88-8624-64585f1fd832 0xc0029e7127 0xc0029e7128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2q8dc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2q8dc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2q8dc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029e71a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029e71c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:19:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:19:54 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-26 13:19:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://9844016572fb531c3c3fd201993dde9d44ac40e2cafa688fdc0229a13c188356}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:20:14.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6030" for this suite.
Jan 26 13:20:22.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:20:22.486: INFO: namespace deployment-6030 deletion completed in 8.199359694s

• [SLOW TEST:41.561 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:20:22.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 13:20:22.626: INFO: Creating deployment "nginx-deployment"
Jan 26 13:20:22.641: INFO: Waiting for observed generation 1
Jan 26 13:20:25.284: INFO: Waiting for all required pods to come up
Jan 26 13:20:25.913: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 26 13:20:50.147: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 26 13:20:50.158: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 26 13:20:50.170: INFO: Updating deployment nginx-deployment
Jan 26 13:20:50.170: INFO: Waiting for observed generation 2
Jan 26 13:20:53.105: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 26 13:20:53.132: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 26 13:20:53.611: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 26 13:20:53.634: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 26 13:20:53.634: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 26 13:20:53.638: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 26 13:20:53.645: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 26 13:20:53.645: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 26 13:20:53.678: INFO: Updating deployment nginx-deployment
Jan 26 13:20:53.678: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 26 13:20:54.033: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 26 13:21:01.976: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 26 13:21:06.442: INFO: Deployment
"nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1109,SelfLink:/apis/apps/v1/namespaces/deployment-1109/deployments/nginx-deployment,UID:77c0c915-6d64-4506-8602-9d9fcf130cde,ResourceVersion:21935714,Generation:3,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-26 13:20:53 +0000 UTC 2020-01-26 13:20:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-26 13:20:59 +0000 UTC 2020-01-26 13:20:22 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 26 13:21:06.980: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1109,SelfLink:/apis/apps/v1/namespaces/deployment-1109/replicasets/nginx-deployment-55fb7cb77f,UID:a2178f82-a0b8-41b3-8a02-8a4c97de7a16,ResourceVersion:21935686,Generation:3,CreationTimestamp:2020-01-26 13:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 77c0c915-6d64-4506-8602-9d9fcf130cde 0xc002f3a4b7 0xc002f3a4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 26 13:21:06.980: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 26 13:21:06.980: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1109,SelfLink:/apis/apps/v1/namespaces/deployment-1109/replicasets/nginx-deployment-7b8c6f4498,UID:f6805fb1-87e3-46a0-80f1-1e986f92386a,ResourceVersion:21935710,Generation:3,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 77c0c915-6d64-4506-8602-9d9fcf130cde 0xc002f3a587 0xc002f3a588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 26 13:21:08.239: INFO: Pod "nginx-deployment-55fb7cb77f-9p4jr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9p4jr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-9p4jr,UID:b8c2d572-de16-4608-b5ac-ab1c6242933b,ResourceVersion:21935634,Generation:0,CreationTimestamp:2020-01-26 13:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f8c57 0xc0022f8c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022f8cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f8d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-26 13:20:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.240: INFO: Pod "nginx-deployment-55fb7cb77f-cfgn4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cfgn4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-cfgn4,UID:80376e4d-c97c-480c-9ce6-37483d24f554,ResourceVersion:21935721,Generation:0,CreationTimestamp:2020-01-26 13:20:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f8dd7 0xc0022f8dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f8e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f8e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-26 13:20:59 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.240: INFO: Pod "nginx-deployment-55fb7cb77f-d9kl6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d9kl6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-d9kl6,UID:a728794d-df31-411e-99d6-f410b2af38ad,ResourceVersion:21935628,Generation:0,CreationTimestamp:2020-01-26 13:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f8f37 0xc0022f8f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f8fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f8fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-26 13:20:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.240: INFO: Pod "nginx-deployment-55fb7cb77f-hfxp8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hfxp8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-hfxp8,UID:0cc89bcc-896e-405b-b419-7e1cd530fcdb,ResourceVersion:21935720,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f9097 0xc0022f9098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022f9140} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-26 13:20:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.240: INFO: Pod "nginx-deployment-55fb7cb77f-k4djz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k4djz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-k4djz,UID:0d18808a-c298-4010-9c11-56931e2e3469,ResourceVersion:21935623,Generation:0,CreationTimestamp:2020-01-26 13:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f9247 0xc0022f9248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f92c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f92e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-26 13:20:50 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.240: INFO: Pod "nginx-deployment-55fb7cb77f-mxm5c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mxm5c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-mxm5c,UID:542e3a60-f49f-4d8e-a497-0b7eddb3aa90,ResourceVersion:21935674,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f93b7 0xc0022f93b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f9430} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.240: INFO: Pod "nginx-deployment-55fb7cb77f-p6hkl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p6hkl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-p6hkl,UID:0f05cf90-49d6-4b11-92c4-687fd6a5d9bd,ResourceVersion:21935662,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f94d7 0xc0022f94d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f9540} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.241: INFO: Pod "nginx-deployment-55fb7cb77f-pq6h5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pq6h5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-pq6h5,UID:75f8281b-346e-4bf4-9221-a66afc0731ac,ResourceVersion:21935689,Generation:0,CreationTimestamp:2020-01-26 13:20:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f95e7 0xc0022f95e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022f9660} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-26 13:20:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.241: INFO: Pod "nginx-deployment-55fb7cb77f-rlwz6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rlwz6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-rlwz6,UID:0d1ca4b7-bfe2-46d0-a812-7067d04fcc1d,ResourceVersion:21935670,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f9757 0xc0022f9758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f97d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f97f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.241: INFO: Pod "nginx-deployment-55fb7cb77f-tlpth" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tlpth,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-tlpth,UID:c548b725-5b57-4cf6-9229-091b94d89b4e,ResourceVersion:21935606,Generation:0,CreationTimestamp:2020-01-26 13:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f9877 0xc0022f9878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f98e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-26 13:20:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.241: INFO: Pod "nginx-deployment-55fb7cb77f-tp4qj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tp4qj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-tp4qj,UID:753dace2-77dc-4979-b4bd-69fa151b47ff,ResourceVersion:21935706,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f99d7 0xc0022f99d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022f9a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-26 13:20:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.241: INFO: Pod "nginx-deployment-55fb7cb77f-whjmm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-whjmm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-whjmm,UID:6d309bc1-8e7f-4647-96b5-57669a765628,ResourceVersion:21935602,Generation:0,CreationTimestamp:2020-01-26 13:20:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f9b47 0xc0022f9b48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f9bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-26 13:20:50 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-55fb7cb77f-zj4ws" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zj4ws,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-55fb7cb77f-zj4ws,UID:7e82dd92-a515-4d77-8b71-3a348ba2a336,ResourceVersion:21935695,Generation:0,CreationTimestamp:2020-01-26 13:20:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f a2178f82-a0b8-41b3-8a02-8a4c97de7a16 0xc0022f9cc7 0xc0022f9cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f9d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:54 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-26 13:20:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-7b8c6f4498-5z4mp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5z4mp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-5z4mp,UID:4a4283cb-cbd3-4f27-8a6c-7836f2113c70,ResourceVersion:21935567,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc0022f9e27 0xc0022f9e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f9e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022f9eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-26 13:20:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://05543afada566cf17eea7f6a390524fb681ca98d8574f546ab32a2fd390cf251}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-7b8c6f4498-6ncq2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6ncq2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-6ncq2,UID:951fae08-5a4c-4b57-9741-130cdabfc9d3,ResourceVersion:21935692,Generation:0,CreationTimestamp:2020-01-26 13:20:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc0022f9f87 0xc0022f9f88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022f9ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-7b8c6f4498-b865j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b865j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-b865j,UID:6bd29578-d51c-4305-9beb-603369f8a1b2,ResourceVersion:21935690,Generation:0,CreationTimestamp:2020-01-26 13:20:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e0a7 
0xc00250e0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e120} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-7b8c6f4498-b9f5t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b9f5t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-b9f5t,UID:a22e76fe-1678-4f09-9baa-865324851188,ResourceVersion:21935540,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e1c7 0xc00250e1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e240} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-26 13:20:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9e85c256f9c0ce1755ec5d6d2b031efc647cc89562f77baa43ed2af6befab61f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-7b8c6f4498-bmrdl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bmrdl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-bmrdl,UID:13185571-f3fb-4b6d-867c-76804ef323a7,ResourceVersion:21935550,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e337 0xc00250e338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-26 13:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:44 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b1a8e8bba7b6541de145d0def12d3cc49113d031bee325148bbb5c657d169ea4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.242: INFO: Pod "nginx-deployment-7b8c6f4498-fxj5h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fxj5h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-fxj5h,UID:ce582e8c-6762-4a54-a31f-b64200c07a4b,ResourceVersion:21935671,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e4a7 0xc00250e4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e520} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.243: INFO: Pod "nginx-deployment-7b8c6f4498-hl726" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hl726,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-hl726,UID:a6ca5571-dcf3-4d29-a39d-75c7a30e4c35,ResourceVersion:21935688,Generation:0,CreationTimestamp:2020-01-26 13:20:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e5c7 0xc00250e5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e630} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.243: INFO: Pod "nginx-deployment-7b8c6f4498-hqszl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hqszl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-hqszl,UID:3d924caa-6970-46ed-b303-bde2b3a4dec0,ResourceVersion:21935681,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e6d7 0xc00250e6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e740} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.243: INFO: Pod "nginx-deployment-7b8c6f4498-hzxwz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hzxwz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-hzxwz,UID:0562424b-2535-4da0-a662-596c93bd20fd,ResourceVersion:21935691,Generation:0,CreationTimestamp:2020-01-26 13:20:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e7e7 
0xc00250e7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e850} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.243: INFO: Pod "nginx-deployment-7b8c6f4498-j6vq4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j6vq4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-j6vq4,UID:1e0a75de-1998-4d4d-b689-ca88ef6aa7a2,ResourceVersion:21935560,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250e8f7 0xc00250e8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250e960} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250e980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-26 13:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://52f4520ca1021998cb8b6260be6ece0f96c448ba85d0551fe909a6bd97c5cae3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-jfbl2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jfbl2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-jfbl2,UID:e9d00aae-ed29-492f-b017-ba310482d245,ResourceVersion:21935708,Generation:0,CreationTimestamp:2020-01-26 13:20:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250ea57 0xc00250ea58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250eac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250eae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-26 13:20:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-k8z67" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k8z67,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-k8z67,UID:12b3aff0-3b00-4bc0-8d61-f0b666f85504,ResourceVersion:21935544,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250eba7 0xc00250eba8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250ec20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250ec40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-26 13:20:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b725b758b7088be45c37eb65daaa7b6c243b85556908fa275c6bfdf3d84b8a16}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-lsg4w" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lsg4w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-lsg4w,UID:a568c2c1-90b8-4345-808c-430b31f07d9c,ResourceVersion:21935687,Generation:0,CreationTimestamp:2020-01-26 13:20:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250ed17 0xc00250ed18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250ed90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250edb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-n7fbx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n7fbx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-n7fbx,UID:056a503d-4fe6-4379-bf5e-305123110298,ResourceVersion:21935547,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250ee37 0xc00250ee38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250eeb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250eed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-26 13:20:22 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-26 13:20:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f593e40ab8c5dc69907228498c84ef338291255c376ec85ed2e417ba37438e3f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-tbbrl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tbbrl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-tbbrl,UID:e63d64a8-91c1-41c1-8d0e-697065c290af,ResourceVersion:21935669,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250efa7 0xc00250efa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250f010} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250f030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-tnj85" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tnj85,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-tnj85,UID:873c7b4f-7ff5-4d0d-9d70-e5df65e6472f,ResourceVersion:21935683,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250f0b7 
0xc00250f0b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250f130} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250f150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-v8226" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v8226,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-v8226,UID:e5d2694e-2418-4f89-bbbf-7ce80c663abf,ResourceVersion:21935536,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250f1d7 0xc00250f1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250f250} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250f270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-26 13:20:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://49df10198d45186a6951ca8b743b63816faca87c545dd8785e098380f6a00bb6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.244: INFO: Pod "nginx-deployment-7b8c6f4498-vl6jf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vl6jf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-vl6jf,UID:a0a5cd24-bdbd-4f41-9856-2ba49de4f337,ResourceVersion:21935680,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250f347 0xc00250f348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250f3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250f3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.245: INFO: Pod "nginx-deployment-7b8c6f4498-xn8jw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xn8jw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-xn8jw,UID:32609eb1-821c-44ed-b413-6fe673e85a8e,ResourceVersion:21935682,Generation:0,CreationTimestamp:2020-01-26 13:20:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250f457 
0xc00250f458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250f4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250f4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 26 13:21:08.245: INFO: Pod "nginx-deployment-7b8c6f4498-zqrfg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zqrfg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1109,SelfLink:/api/v1/namespaces/deployment-1109/pods/nginx-deployment-7b8c6f4498-zqrfg,UID:7ca09332-b897-4a88-bc8b-0d700795ef78,ResourceVersion:21935562,Generation:0,CreationTimestamp:2020-01-26 13:20:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 f6805fb1-87e3-46a0-80f1-1e986f92386a 0xc00250f577 0xc00250f578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c8kmz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c8kmz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c8kmz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00250f5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00250f600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:20:22 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-26 13:20:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-26 13:20:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://61ac3d45f12ff38502f8b173934703e6a035cf89f0c47a5619b77303eb7ccde2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:21:08.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1109" for this 
suite. Jan 26 13:22:09.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:22:11.894: INFO: namespace deployment-1109 deletion completed in 1m2.99090305s • [SLOW TEST:109.408 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:22:11.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:22:14.967: INFO: Creating deployment "test-recreate-deployment" Jan 26 13:22:15.011: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 26 13:22:15.432: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 26 13:22:17.806: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 26 13:22:17.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:19.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:22.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:23.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:26.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:28.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:30.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:31.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:33.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:35.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:37.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641736, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715641735, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:22:39.868: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 26 13:22:39.893: INFO: Updating deployment test-recreate-deployment Jan 26 13:22:39.893: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 26 13:22:40.261: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9473,SelfLink:/apis/apps/v1/namespaces/deployment-9473/deployments/test-recreate-deployment,UID:6b20da38-1e47-4a10-bdfc-6f8ffbe3a072,ResourceVersion:21936123,Generation:2,CreationTimestamp:2020-01-26 13:22:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-26 13:22:40 +0000 UTC 2020-01-26 13:22:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-26 13:22:40 +0000 UTC 2020-01-26 13:22:15 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 26 13:22:40.269: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9473,SelfLink:/apis/apps/v1/namespaces/deployment-9473/replicasets/test-recreate-deployment-5c8c9cc69d,UID:f773b93a-013d-4ab6-b588-0fb1a8d949de,ResourceVersion:21936120,Generation:1,CreationTimestamp:2020-01-26 13:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6b20da38-1e47-4a10-bdfc-6f8ffbe3a072 0xc000af8b07 0xc000af8b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 26 13:22:40.269: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 26 13:22:40.269: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9473,SelfLink:/apis/apps/v1/namespaces/deployment-9473/replicasets/test-recreate-deployment-6df85df6b9,UID:be4e12ba-8224-482b-93bd-43a407ff313a,ResourceVersion:21936112,Generation:2,CreationTimestamp:2020-01-26 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6b20da38-1e47-4a10-bdfc-6f8ffbe3a072 0xc000af8bf7 0xc000af8bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 26 13:22:40.274: INFO: Pod "test-recreate-deployment-5c8c9cc69d-qhztp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-qhztp,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9473,SelfLink:/api/v1/namespaces/deployment-9473/pods/test-recreate-deployment-5c8c9cc69d-qhztp,UID:ee2d3956-3bbe-4d21-9df5-55e059225c07,ResourceVersion:21936119,Generation:0,CreationTimestamp:2020-01-26 13:22:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d f773b93a-013d-4ab6-b588-0fb1a8d949de 0xc000af9df7 0xc000af9df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n6l2h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6l2h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n6l2h true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000af9f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000af9f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:22:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:22:40.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9473" for this suite. 
Jan 26 13:22:46.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:22:46.488: INFO: namespace deployment-9473 deletion completed in 6.210512569s • [SLOW TEST:34.593 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:22:46.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-f6d5fbb7-2615-4315-b573-4ff3d0922edc STEP: Creating secret with name s-test-opt-upd-42a0b762-e89a-4566-95ea-feb1362079d9 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f6d5fbb7-2615-4315-b573-4ff3d0922edc STEP: Updating secret s-test-opt-upd-42a0b762-e89a-4566-95ea-feb1362079d9 STEP: Creating secret with name s-test-opt-create-e22b1b85-0fb0-4896-8037-f1cf884ceb07 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:23:07.168: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6104" for this suite. Jan 26 13:23:21.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:23:21.453: INFO: namespace projected-6104 deletion completed in 14.279711618s • [SLOW TEST:34.963 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:23:21.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 
'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:24:13.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5226" for this suite. Jan 26 13:24:19.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:24:20.084: INFO: namespace container-runtime-5226 deletion completed in 6.140838993s • [SLOW TEST:58.631 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:24:20.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 in namespace container-probe-4894 Jan 26 13:24:28.219: INFO: Started pod liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 in namespace container-probe-4894 STEP: checking the pod's current state and verifying that restartCount is present Jan 26 13:24:28.223: INFO: Initial restart count of pod liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 is 0 Jan 26 13:24:50.420: INFO: Restart count of pod container-probe-4894/liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 is now 1 (22.196840708s elapsed) Jan 26 13:25:10.669: INFO: Restart count of pod container-probe-4894/liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 is now 2 (42.446002715s elapsed) Jan 26 13:25:30.812: INFO: Restart count of pod container-probe-4894/liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 is now 3 (1m2.58908067s elapsed) Jan 26 13:25:48.925: INFO: Restart count of pod container-probe-4894/liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 is now 4 (1m20.702444266s elapsed) Jan 26 13:26:51.276: INFO: Restart count of pod container-probe-4894/liveness-54fd0398-3dcd-4f08-903d-3ac2682f6c88 is now 5 (2m23.052739308s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:26:51.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4894" for this suite. Jan 26 13:26:57.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:26:57.642: INFO: namespace container-probe-4894 deletion completed in 6.289133037s • [SLOW TEST:157.558 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:26:57.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 26 13:26:57.792: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 26 13:26:57.809: INFO: Waiting for terminating namespaces to be deleted... 
Jan 26 13:26:57.814: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 26 13:26:57.835: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.835: INFO: Container kube-proxy ready: true, restart count 0 Jan 26 13:26:57.835: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 26 13:26:57.835: INFO: Container weave ready: true, restart count 0 Jan 26 13:26:57.835: INFO: Container weave-npc ready: true, restart count 0 Jan 26 13:26:57.835: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 26 13:26:57.848: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.848: INFO: Container etcd ready: true, restart count 0 Jan 26 13:26:57.848: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 26 13:26:57.848: INFO: Container weave ready: true, restart count 0 Jan 26 13:26:57.848: INFO: Container weave-npc ready: true, restart count 0 Jan 26 13:26:57.848: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.848: INFO: Container coredns ready: true, restart count 0 Jan 26 13:26:57.848: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.848: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 26 13:26:57.848: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.848: INFO: Container kube-proxy ready: true, restart count 0 Jan 26 13:26:57.848: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 
UTC (1 container statuses recorded) Jan 26 13:26:57.849: INFO: Container kube-apiserver ready: true, restart count 0 Jan 26 13:26:57.849: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.849: INFO: Container kube-scheduler ready: true, restart count 13 Jan 26 13:26:57.849: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 26 13:26:57.849: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e1205030-9433-45cc-a4b4-a828bf21cabd 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-e1205030-9433-45cc-a4b4-a828bf21cabd off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-e1205030-9433-45cc-a4b4-a828bf21cabd [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:27:16.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3353" for this suite. 
Jan 26 13:27:30.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:27:30.330: INFO: namespace sched-pred-3353 deletion completed in 14.195900221s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:32.688 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:27:30.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-863e484d-9e91-4b4d-a12e-2ba30bd06f26 STEP: Creating a pod to test consume secrets Jan 26 13:27:30.412: INFO: Waiting up to 5m0s for pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d" in namespace "secrets-2396" to be "success or failure" Jan 26 13:27:30.448: INFO: Pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.250114ms Jan 26 13:27:32.463: INFO: Pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05074577s Jan 26 13:27:34.478: INFO: Pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065525724s Jan 26 13:27:36.489: INFO: Pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077085762s Jan 26 13:27:38.505: INFO: Pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092942699s STEP: Saw pod success Jan 26 13:27:38.505: INFO: Pod "pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d" satisfied condition "success or failure" Jan 26 13:27:38.516: INFO: Trying to get logs from node iruya-node pod pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d container secret-env-test: STEP: delete the pod Jan 26 13:27:38.587: INFO: Waiting for pod pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d to disappear Jan 26 13:27:38.592: INFO: Pod pod-secrets-a2fec941-b1e3-4aeb-ad38-ba20d8e9e49d no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:27:38.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2396" for this suite. 
Jan 26 13:27:44.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:27:44.800: INFO: namespace secrets-2396 deletion completed in 6.201845532s • [SLOW TEST:14.470 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:27:44.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-6d5b675f-0449-4369-af9c-77adcb017337 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:27:44.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5563" for this suite. 
Jan 26 13:27:50.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:27:51.084: INFO: namespace secrets-5563 deletion completed in 6.132694755s • [SLOW TEST:6.284 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:27:51.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4196 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 26 13:27:51.164: INFO: Found 0 stateful pods, waiting for 3 Jan 26 13:28:01.176: INFO: Found 2 stateful pods, waiting for 3 Jan 26 13:28:11.175: INFO: Waiting for pod ss2-0 to enter Running - 
Ready=true, currently Running - Ready=true Jan 26 13:28:11.175: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 26 13:28:11.175: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 26 13:28:21.185: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 26 13:28:21.185: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 26 13:28:21.185: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 26 13:28:21.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4196 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 26 13:28:24.027: INFO: stderr: "I0126 13:28:23.235506 478 log.go:172] (0xc000710420) (0xc0005be8c0) Create stream\nI0126 13:28:23.235615 478 log.go:172] (0xc000710420) (0xc0005be8c0) Stream added, broadcasting: 1\nI0126 13:28:23.243804 478 log.go:172] (0xc000710420) Reply frame received for 1\nI0126 13:28:23.243868 478 log.go:172] (0xc000710420) (0xc0007300a0) Create stream\nI0126 13:28:23.243895 478 log.go:172] (0xc000710420) (0xc0007300a0) Stream added, broadcasting: 3\nI0126 13:28:23.246279 478 log.go:172] (0xc000710420) Reply frame received for 3\nI0126 13:28:23.246329 478 log.go:172] (0xc000710420) (0xc0009ea000) Create stream\nI0126 13:28:23.246352 478 log.go:172] (0xc000710420) (0xc0009ea000) Stream added, broadcasting: 5\nI0126 13:28:23.255729 478 log.go:172] (0xc000710420) Reply frame received for 5\nI0126 13:28:23.434572 478 log.go:172] (0xc000710420) Data frame received for 5\nI0126 13:28:23.434634 478 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0126 13:28:23.434657 478 log.go:172] (0xc0009ea000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 13:28:23.908658 478 log.go:172] (0xc000710420) Data frame received for 3\nI0126 
13:28:23.909197 478 log.go:172] (0xc0007300a0) (3) Data frame handling\nI0126 13:28:23.909294 478 log.go:172] (0xc0007300a0) (3) Data frame sent\nI0126 13:28:24.012146 478 log.go:172] (0xc000710420) (0xc0007300a0) Stream removed, broadcasting: 3\nI0126 13:28:24.012213 478 log.go:172] (0xc000710420) Data frame received for 1\nI0126 13:28:24.012232 478 log.go:172] (0xc0005be8c0) (1) Data frame handling\nI0126 13:28:24.012243 478 log.go:172] (0xc0005be8c0) (1) Data frame sent\nI0126 13:28:24.012251 478 log.go:172] (0xc000710420) (0xc0005be8c0) Stream removed, broadcasting: 1\nI0126 13:28:24.012324 478 log.go:172] (0xc000710420) (0xc0009ea000) Stream removed, broadcasting: 5\nI0126 13:28:24.012431 478 log.go:172] (0xc000710420) Go away received\nI0126 13:28:24.013374 478 log.go:172] (0xc000710420) (0xc0005be8c0) Stream removed, broadcasting: 1\nI0126 13:28:24.013574 478 log.go:172] (0xc000710420) (0xc0007300a0) Stream removed, broadcasting: 3\nI0126 13:28:24.013596 478 log.go:172] (0xc000710420) (0xc0009ea000) Stream removed, broadcasting: 5\n" Jan 26 13:28:24.028: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 26 13:28:24.028: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 26 13:28:34.128: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 26 13:28:44.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4196 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 26 13:28:44.657: INFO: stderr: "I0126 13:28:44.459711 508 log.go:172] (0xc00096e370) (0xc0008806e0) Create stream\nI0126 13:28:44.459960 508 log.go:172] (0xc00096e370) (0xc0008806e0) Stream added, broadcasting: 1\nI0126 
13:28:44.463464 508 log.go:172] (0xc00096e370) Reply frame received for 1\nI0126 13:28:44.463515 508 log.go:172] (0xc00096e370) (0xc000566320) Create stream\nI0126 13:28:44.463522 508 log.go:172] (0xc00096e370) (0xc000566320) Stream added, broadcasting: 3\nI0126 13:28:44.464578 508 log.go:172] (0xc00096e370) Reply frame received for 3\nI0126 13:28:44.464595 508 log.go:172] (0xc00096e370) (0xc0005663c0) Create stream\nI0126 13:28:44.464600 508 log.go:172] (0xc00096e370) (0xc0005663c0) Stream added, broadcasting: 5\nI0126 13:28:44.465456 508 log.go:172] (0xc00096e370) Reply frame received for 5\nI0126 13:28:44.581488 508 log.go:172] (0xc00096e370) Data frame received for 5\nI0126 13:28:44.581702 508 log.go:172] (0xc0005663c0) (5) Data frame handling\nI0126 13:28:44.581727 508 log.go:172] (0xc0005663c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 13:28:44.581974 508 log.go:172] (0xc00096e370) Data frame received for 3\nI0126 13:28:44.582185 508 log.go:172] (0xc000566320) (3) Data frame handling\nI0126 13:28:44.582251 508 log.go:172] (0xc000566320) (3) Data frame sent\nI0126 13:28:44.649752 508 log.go:172] (0xc00096e370) (0xc000566320) Stream removed, broadcasting: 3\nI0126 13:28:44.649869 508 log.go:172] (0xc00096e370) Data frame received for 1\nI0126 13:28:44.649912 508 log.go:172] (0xc00096e370) (0xc0005663c0) Stream removed, broadcasting: 5\nI0126 13:28:44.649981 508 log.go:172] (0xc0008806e0) (1) Data frame handling\nI0126 13:28:44.649999 508 log.go:172] (0xc0008806e0) (1) Data frame sent\nI0126 13:28:44.650010 508 log.go:172] (0xc00096e370) (0xc0008806e0) Stream removed, broadcasting: 1\nI0126 13:28:44.650023 508 log.go:172] (0xc00096e370) Go away received\nI0126 13:28:44.650930 508 log.go:172] (0xc00096e370) (0xc0008806e0) Stream removed, broadcasting: 1\nI0126 13:28:44.650957 508 log.go:172] (0xc00096e370) (0xc000566320) Stream removed, broadcasting: 3\nI0126 13:28:44.650970 508 log.go:172] (0xc00096e370) (0xc0005663c0) Stream 
removed, broadcasting: 5\n" Jan 26 13:28:44.657: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 26 13:28:44.657: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 26 13:28:54.691: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:28:54.691: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:28:54.691: INFO: Waiting for Pod statefulset-4196/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:28:54.691: INFO: Waiting for Pod statefulset-4196/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:29:04.771: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:29:04.771: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:29:04.771: INFO: Waiting for Pod statefulset-4196/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:29:14.702: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:29:14.702: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:29:14.702: INFO: Waiting for Pod statefulset-4196/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:29:24.768: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:29:24.768: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 26 13:29:34.713: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:29:34.713: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jan 26 13:29:44.707: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4196 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 26 13:29:45.242: INFO: stderr: "I0126 13:29:44.948767 528 log.go:172] (0xc00098a420) (0xc0003e86e0) Create stream\nI0126 13:29:44.949055 528 log.go:172] (0xc00098a420) (0xc0003e86e0) Stream added, broadcasting: 1\nI0126 13:29:44.952302 528 log.go:172] (0xc00098a420) Reply frame received for 1\nI0126 13:29:44.952335 528 log.go:172] (0xc00098a420) (0xc00060a460) Create stream\nI0126 13:29:44.952344 528 log.go:172] (0xc00098a420) (0xc00060a460) Stream added, broadcasting: 3\nI0126 13:29:44.953561 528 log.go:172] (0xc00098a420) Reply frame received for 3\nI0126 13:29:44.953615 528 log.go:172] (0xc00098a420) (0xc00098c000) Create stream\nI0126 13:29:44.953649 528 log.go:172] (0xc00098a420) (0xc00098c000) Stream added, broadcasting: 5\nI0126 13:29:44.955338 528 log.go:172] (0xc00098a420) Reply frame received for 5\nI0126 13:29:45.066204 528 log.go:172] (0xc00098a420) Data frame received for 5\nI0126 13:29:45.066269 528 log.go:172] (0xc00098c000) (5) Data frame handling\nI0126 13:29:45.066289 528 log.go:172] (0xc00098c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 13:29:45.108250 528 log.go:172] (0xc00098a420) Data frame received for 3\nI0126 13:29:45.108290 528 log.go:172] (0xc00060a460) (3) Data frame handling\nI0126 13:29:45.108337 528 log.go:172] (0xc00060a460) (3) Data frame sent\nI0126 13:29:45.226354 528 log.go:172] (0xc00098a420) Data frame received for 1\nI0126 13:29:45.226452 528 log.go:172] (0xc0003e86e0) (1) Data frame handling\nI0126 13:29:45.226478 528 log.go:172] (0xc0003e86e0) (1) Data frame sent\nI0126 13:29:45.226937 528 log.go:172] (0xc00098a420) (0xc0003e86e0) Stream removed, broadcasting: 1\nI0126 13:29:45.227602 528 log.go:172] (0xc00098a420) (0xc00060a460) Stream removed, broadcasting: 3\nI0126 13:29:45.228202 528 log.go:172] (0xc00098a420) (0xc00098c000) Stream removed, 
broadcasting: 5\nI0126 13:29:45.228311 528 log.go:172] (0xc00098a420) Go away received\nI0126 13:29:45.228700 528 log.go:172] (0xc00098a420) (0xc0003e86e0) Stream removed, broadcasting: 1\nI0126 13:29:45.228788 528 log.go:172] (0xc00098a420) (0xc00060a460) Stream removed, broadcasting: 3\nI0126 13:29:45.228829 528 log.go:172] (0xc00098a420) (0xc00098c000) Stream removed, broadcasting: 5\n" Jan 26 13:29:45.242: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 26 13:29:45.242: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 26 13:29:45.319: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 26 13:29:55.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4196 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 26 13:29:55.817: INFO: stderr: "I0126 13:29:55.629200 550 log.go:172] (0xc000a5c370) (0xc000922780) Create stream\nI0126 13:29:55.629557 550 log.go:172] (0xc000a5c370) (0xc000922780) Stream added, broadcasting: 1\nI0126 13:29:55.635221 550 log.go:172] (0xc000a5c370) Reply frame received for 1\nI0126 13:29:55.635415 550 log.go:172] (0xc000a5c370) (0xc0006a21e0) Create stream\nI0126 13:29:55.635436 550 log.go:172] (0xc000a5c370) (0xc0006a21e0) Stream added, broadcasting: 3\nI0126 13:29:55.638751 550 log.go:172] (0xc000a5c370) Reply frame received for 3\nI0126 13:29:55.638795 550 log.go:172] (0xc000a5c370) (0xc000922820) Create stream\nI0126 13:29:55.638804 550 log.go:172] (0xc000a5c370) (0xc000922820) Stream added, broadcasting: 5\nI0126 13:29:55.640501 550 log.go:172] (0xc000a5c370) Reply frame received for 5\nI0126 13:29:55.727890 550 log.go:172] (0xc000a5c370) Data frame received for 5\nI0126 13:29:55.727956 550 log.go:172] (0xc000922820) (5) Data frame handling\nI0126 13:29:55.727971 550 log.go:172] (0xc000922820) (5) Data 
frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 13:29:55.728269 550 log.go:172] (0xc000a5c370) Data frame received for 3\nI0126 13:29:55.728314 550 log.go:172] (0xc0006a21e0) (3) Data frame handling\nI0126 13:29:55.728344 550 log.go:172] (0xc0006a21e0) (3) Data frame sent\nI0126 13:29:55.806780 550 log.go:172] (0xc000a5c370) Data frame received for 1\nI0126 13:29:55.807018 550 log.go:172] (0xc000a5c370) (0xc000922820) Stream removed, broadcasting: 5\nI0126 13:29:55.807062 550 log.go:172] (0xc000922780) (1) Data frame handling\nI0126 13:29:55.807088 550 log.go:172] (0xc000922780) (1) Data frame sent\nI0126 13:29:55.807132 550 log.go:172] (0xc000a5c370) (0xc0006a21e0) Stream removed, broadcasting: 3\nI0126 13:29:55.807153 550 log.go:172] (0xc000a5c370) (0xc000922780) Stream removed, broadcasting: 1\nI0126 13:29:55.807167 550 log.go:172] (0xc000a5c370) Go away received\nI0126 13:29:55.808536 550 log.go:172] (0xc000a5c370) (0xc000922780) Stream removed, broadcasting: 1\nI0126 13:29:55.808549 550 log.go:172] (0xc000a5c370) (0xc0006a21e0) Stream removed, broadcasting: 3\nI0126 13:29:55.808553 550 log.go:172] (0xc000a5c370) (0xc000922820) Stream removed, broadcasting: 5\n" Jan 26 13:29:55.818: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 26 13:29:55.818: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 26 13:30:05.863: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:30:05.863: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:05.863: INFO: Waiting for Pod statefulset-4196/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:05.864: INFO: Waiting for Pod statefulset-4196/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:15.887: INFO: Waiting for StatefulSet 
statefulset-4196/ss2 to complete update Jan 26 13:30:15.887: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:15.887: INFO: Waiting for Pod statefulset-4196/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:25.885: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:30:25.886: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:25.886: INFO: Waiting for Pod statefulset-4196/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:35.890: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update Jan 26 13:30:35.890: INFO: Waiting for Pod statefulset-4196/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 26 13:30:45.883: INFO: Waiting for StatefulSet statefulset-4196/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 26 13:30:55.881: INFO: Deleting all statefulset in ns statefulset-4196 Jan 26 13:30:55.887: INFO: Scaling statefulset ss2 to 0 Jan 26 13:31:26.023: INFO: Waiting for statefulset status.replicas updated to 0 Jan 26 13:31:26.029: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:31:26.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4196" for this suite. 
Jan 26 13:31:34.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:31:34.212: INFO: namespace statefulset-4196 deletion completed in 8.146749326s • [SLOW TEST:223.127 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:31:34.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 26 13:31:34.278: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:31:48.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "init-container-7933" for this suite. Jan 26 13:31:54.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:31:54.372: INFO: namespace init-container-7933 deletion completed in 6.204940462s • [SLOW TEST:20.159 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:31:54.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jan 26 13:32:02.562: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 26 13:32:12.715: INFO: no pod exists with the name we were looking 
for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:32:12.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4479" for this suite. Jan 26 13:32:18.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:32:19.048: INFO: namespace pods-4479 deletion completed in 6.323231525s • [SLOW TEST:24.676 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:32:19.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:32:27.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8593" for this suite. Jan 26 13:32:33.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:32:33.493: INFO: namespace emptydir-wrapper-8593 deletion completed in 6.171933284s • [SLOW TEST:14.444 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:32:33.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jan 26 13:32:34.108: INFO: created pod pod-service-account-defaultsa Jan 26 13:32:34.108: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 26 13:32:34.226: INFO: created pod pod-service-account-mountsa Jan 26 13:32:34.226: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 26 13:32:34.260: INFO: created pod 
pod-service-account-nomountsa Jan 26 13:32:34.260: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 26 13:32:34.311: INFO: created pod pod-service-account-defaultsa-mountspec Jan 26 13:32:34.311: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 26 13:32:34.422: INFO: created pod pod-service-account-mountsa-mountspec Jan 26 13:32:34.423: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 26 13:32:34.430: INFO: created pod pod-service-account-nomountsa-mountspec Jan 26 13:32:34.430: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 26 13:32:34.451: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 26 13:32:34.451: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 26 13:32:35.602: INFO: created pod pod-service-account-mountsa-nomountspec Jan 26 13:32:35.602: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 26 13:32:35.823: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 26 13:32:35.823: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:32:35.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8602" for this suite. 
Jan 26 13:33:16.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:33:16.391: INFO: namespace svcaccounts-8602 deletion completed in 40.324949138s • [SLOW TEST:42.896 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:33:16.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 26 13:33:17.121: INFO: Pod name wrapped-volume-race-15dac338-73a7-41c9-883c-873c4b77e086: Found 0 pods out of 5 Jan 26 13:33:22.139: INFO: Pod name wrapped-volume-race-15dac338-73a7-41c9-883c-873c4b77e086: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-15dac338-73a7-41c9-883c-873c4b77e086 in namespace emptydir-wrapper-5296, will wait for the garbage collector to delete the pods Jan 26 13:33:52.287: INFO: Deleting ReplicationController 
wrapped-volume-race-15dac338-73a7-41c9-883c-873c4b77e086 took: 33.351005ms Jan 26 13:33:52.688: INFO: Terminating ReplicationController wrapped-volume-race-15dac338-73a7-41c9-883c-873c4b77e086 pods took: 400.60447ms STEP: Creating RC which spawns configmap-volume pods Jan 26 13:34:47.133: INFO: Pod name wrapped-volume-race-2a4b63a6-49ba-4b11-8ecb-faa73acd274c: Found 0 pods out of 5 Jan 26 13:34:52.145: INFO: Pod name wrapped-volume-race-2a4b63a6-49ba-4b11-8ecb-faa73acd274c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2a4b63a6-49ba-4b11-8ecb-faa73acd274c in namespace emptydir-wrapper-5296, will wait for the garbage collector to delete the pods Jan 26 13:35:20.288: INFO: Deleting ReplicationController wrapped-volume-race-2a4b63a6-49ba-4b11-8ecb-faa73acd274c took: 13.534913ms Jan 26 13:35:20.689: INFO: Terminating ReplicationController wrapped-volume-race-2a4b63a6-49ba-4b11-8ecb-faa73acd274c pods took: 400.491968ms STEP: Creating RC which spawns configmap-volume pods Jan 26 13:36:07.188: INFO: Pod name wrapped-volume-race-91715410-4e6e-43e1-b32b-a2f6fd6be98b: Found 0 pods out of 5 Jan 26 13:36:12.208: INFO: Pod name wrapped-volume-race-91715410-4e6e-43e1-b32b-a2f6fd6be98b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-91715410-4e6e-43e1-b32b-a2f6fd6be98b in namespace emptydir-wrapper-5296, will wait for the garbage collector to delete the pods Jan 26 13:36:42.339: INFO: Deleting ReplicationController wrapped-volume-race-91715410-4e6e-43e1-b32b-a2f6fd6be98b took: 17.009041ms Jan 26 13:36:42.740: INFO: Terminating ReplicationController wrapped-volume-race-91715410-4e6e-43e1-b32b-a2f6fd6be98b pods took: 401.363956ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:37:28.270: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5296" for this suite. Jan 26 13:37:38.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:37:38.446: INFO: namespace emptydir-wrapper-5296 deletion completed in 10.170374929s • [SLOW TEST:262.055 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:37:38.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:37:48.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-815" for this suite. 
Jan 26 13:38:32.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:38:32.982: INFO: namespace kubelet-test-815 deletion completed in 44.248586871s • [SLOW TEST:54.535 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:38:32.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-d1c0e551-0d85-4eeb-a194-6c2c206878b9 STEP: Creating a pod to test consume secrets Jan 26 13:38:33.123: INFO: Waiting up to 5m0s for pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1" in namespace "secrets-6602" to be "success or failure" Jan 26 13:38:33.163: INFO: Pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.972466ms Jan 26 13:38:35.174: INFO: Pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050693358s Jan 26 13:38:37.179: INFO: Pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056077381s Jan 26 13:38:39.187: INFO: Pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063637611s Jan 26 13:38:41.208: INFO: Pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085277275s STEP: Saw pod success Jan 26 13:38:41.209: INFO: Pod "pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1" satisfied condition "success or failure" Jan 26 13:38:41.218: INFO: Trying to get logs from node iruya-node pod pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1 container secret-volume-test: STEP: delete the pod Jan 26 13:38:41.304: INFO: Waiting for pod pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1 to disappear Jan 26 13:38:41.312: INFO: Pod pod-secrets-216cd1c3-edea-40a7-943f-d615b46a4ab1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:38:41.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6602" for this suite. 
Jan 26 13:38:47.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:38:47.496: INFO: namespace secrets-6602 deletion completed in 6.17658325s • [SLOW TEST:14.513 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:38:47.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 26 13:38:47.569: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 26 13:38:47.668: INFO: Waiting for terminating namespaces to be deleted... 
Jan 26 13:38:47.675: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 26 13:38:47.693: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.693: INFO: Container kube-proxy ready: true, restart count 0 Jan 26 13:38:47.693: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 26 13:38:47.693: INFO: Container weave ready: true, restart count 0 Jan 26 13:38:47.693: INFO: Container weave-npc ready: true, restart count 0 Jan 26 13:38:47.693: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 26 13:38:47.708: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container etcd ready: true, restart count 0 Jan 26 13:38:47.708: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 26 13:38:47.708: INFO: Container weave ready: true, restart count 0 Jan 26 13:38:47.708: INFO: Container weave-npc ready: true, restart count 0 Jan 26 13:38:47.708: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container coredns ready: true, restart count 0 Jan 26 13:38:47.708: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container kube-controller-manager ready: true, restart count 19 Jan 26 13:38:47.708: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container kube-proxy ready: true, restart count 0 Jan 26 13:38:47.708: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 
UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container kube-apiserver ready: true, restart count 0 Jan 26 13:38:47.708: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container kube-scheduler ready: true, restart count 13 Jan 26 13:38:47.708: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 26 13:38:47.708: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Jan 26 13:38:47.833: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Jan 26 13:38:47.833: INFO: Pod 
weave-net-rlp57 requesting resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526.15ed73666346622c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-841/filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526 to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526.15ed736786368fcf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526.15ed736813feb1c7], Reason = [Created], Message = [Created container filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526] STEP: Considering event: Type = [Normal], Name = [filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526.15ed736838b78db8], Reason = [Started], Message = [Started container filler-pod-0386bf72-cdbd-4f36-8d92-7aa02e247526] STEP: Considering event: Type = [Normal], Name = [filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6.15ed73666a37c609], Reason = [Scheduled], Message = [Successfully assigned sched-pred-841/filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6 to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6.15ed7367844fcdcf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6.15ed73684320dc74], Reason = [Created], Message = [Created container filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6] STEP: Considering event: Type = [Normal], Name = [filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6.15ed73685ee6a48e], Reason = [Started], Message = [Started container filler-pod-52425158-198b-402e-83ab-2ea3e9f897a6] STEP: 
Considering event: Type = [Warning], Name = [additional-pod.15ed7368be87ba51], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:38:59.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-841" for this suite. Jan 26 13:39:09.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:39:09.440: INFO: namespace sched-pred-841 deletion completed in 10.170829593s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:21.944 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:39:09.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:39:09.585: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 26 13:39:09.656: INFO: Number of nodes with available pods: 0 Jan 26 13:39:09.656: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:10.781: INFO: Number of nodes with available pods: 0 Jan 26 13:39:10.781: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:11.677: INFO: Number of nodes with available pods: 0 Jan 26 13:39:11.677: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:12.699: INFO: Number of nodes with available pods: 0 Jan 26 13:39:12.700: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:13.680: INFO: Number of nodes with available pods: 0 Jan 26 13:39:13.680: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:15.356: INFO: Number of nodes with available pods: 0 Jan 26 13:39:15.356: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:16.024: INFO: Number of nodes with available pods: 0 Jan 26 13:39:16.024: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:16.688: INFO: Number of nodes with available pods: 0 Jan 26 13:39:16.689: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:17.681: INFO: Number of nodes with available pods: 0 Jan 26 13:39:17.681: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:18.683: INFO: Number of nodes with available pods: 2 Jan 26 13:39:18.683: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update 
daemon pods image. STEP: Check that daemon pods images are updated. Jan 26 13:39:18.745: INFO: Wrong image for pod: daemon-set-c9278. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:18.745: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:19.774: INFO: Wrong image for pod: daemon-set-c9278. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:19.774: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:20.775: INFO: Wrong image for pod: daemon-set-c9278. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:20.776: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:21.773: INFO: Wrong image for pod: daemon-set-c9278. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:21.773: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:22.785: INFO: Wrong image for pod: daemon-set-c9278. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:22.785: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:23.775: INFO: Wrong image for pod: daemon-set-c9278. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:23.775: INFO: Pod daemon-set-c9278 is not available Jan 26 13:39:23.775: INFO: Wrong image for pod: daemon-set-smcz8. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:24.799: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:24.799: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:26.000: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:26.000: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:26.773: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:26.774: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:27.790: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:27.790: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:28.978: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:28.979: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:29.825: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:29.825: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:30.888: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:30.888: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:31.773: INFO: Pod daemon-set-66f8x is not available Jan 26 13:39:31.773: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:32.769: INFO: Wrong image for pod: daemon-set-smcz8. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:33.799: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:34.775: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:35.823: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:36.812: INFO: Wrong image for pod: daemon-set-smcz8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jan 26 13:39:36.812: INFO: Pod daemon-set-smcz8 is not available Jan 26 13:39:37.783: INFO: Pod daemon-set-84frc is not available STEP: Check that daemon pods are still running on every node of the cluster. Jan 26 13:39:37.812: INFO: Number of nodes with available pods: 1 Jan 26 13:39:37.812: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:38.886: INFO: Number of nodes with available pods: 1 Jan 26 13:39:38.886: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:39.848: INFO: Number of nodes with available pods: 1 Jan 26 13:39:39.848: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:40.856: INFO: Number of nodes with available pods: 1 Jan 26 13:39:40.856: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:41.832: INFO: Number of nodes with available pods: 1 Jan 26 13:39:41.832: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:42.846: INFO: Number of nodes with available pods: 1 Jan 26 13:39:42.846: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:43.899: INFO: Number of nodes with available pods: 1 Jan 26 13:39:43.899: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:39:44.832: INFO: 
Number of nodes with available pods: 2 Jan 26 13:39:44.832: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7821, will wait for the garbage collector to delete the pods Jan 26 13:39:45.119: INFO: Deleting DaemonSet.extensions daemon-set took: 11.828481ms Jan 26 13:39:45.419: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.415522ms Jan 26 13:39:57.933: INFO: Number of nodes with available pods: 0 Jan 26 13:39:57.933: INFO: Number of running nodes: 0, number of available pods: 0 Jan 26 13:39:57.938: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7821/daemonsets","resourceVersion":"21939318"},"items":null} Jan 26 13:39:57.941: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7821/pods","resourceVersion":"21939318"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:39:57.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7821" for this suite. 
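The RollingUpdate wait loops above poll two conditions until they agree: every node runs a daemon pod with the desired image, and that pod is available ("Number of running nodes: 2, number of available pods: 2"). A minimal stdlib-Go model of that completion check (the type and function names here are mine, not the e2e framework's):

```go
package main

import "fmt"

// podState models what the poll loop inspects per node: which image
// the daemon pod runs and whether it is available.
type podState struct {
	image     string
	available bool
}

// rolloutDone reports whether every node runs an available pod with
// the desired image -- the condition the log's wait loop polls for.
func rolloutDone(nodes map[string]podState, desiredImage string) bool {
	for _, p := range nodes {
		if p.image != desiredImage || !p.available {
			return false
		}
	}
	return true
}

func main() {
	nodes := map[string]podState{
		"iruya-node":                {image: "docker.io/library/nginx:1.14-alpine", available: true},
		"iruya-server-sfge57q7djm7": {image: "gcr.io/kubernetes-e2e-test-images/redis:1.0", available: true},
	}
	// One node still runs the old nginx image, so the rollout is not done.
	fmt.Println(rolloutDone(nodes, "gcr.io/kubernetes-e2e-test-images/redis:1.0"))
}
```

This is why the log alternates "Wrong image for pod" and "Pod ... is not available" lines: the loop keeps repolling until both predicates hold on every node.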
Jan 26 13:40:04.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:40:04.144: INFO: namespace daemonsets-7821 deletion completed in 6.158339576s • [SLOW TEST:54.704 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:40:04.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 26 13:40:12.885: INFO: Successfully updated pod "labelsupdate7fc1e40d-51b2-4434-a0e6-d61c18a75f07" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:40:16.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3393" for this suite. 
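The "should update labels on modification" test above relies on the downward API volume re-serializing `metadata.labels` into the projected file after the pod's labels change. A sketch of that serialization as I understand it (one `key="value"` line per label, sorted by key; the function name is mine):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// formatLabels renders a label map the way a downward API volume file
// presents metadata.labels: one key="value" line per label, sorted by
// key so repeated updates produce a deterministic file.
func formatLabels(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%q\n", k, labels[k])
	}
	return b.String()
}

func main() {
	fmt.Print(formatLabels(map[string]string{"key1": "value1", "key2": "value2"}))
}
```

The test passes once the container observes the updated rendering of this file in the volume.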
Jan 26 13:40:39.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:40:39.167: INFO: namespace projected-3393 deletion completed in 22.167947618s • [SLOW TEST:35.022 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:40:39.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:40:49.423: INFO: Waiting up to 5m0s for pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391" in namespace "pods-6257" to be "success or failure" Jan 26 13:40:49.578: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391": Phase="Pending", Reason="", readiness=false. Elapsed: 154.069875ms Jan 26 13:40:51.636: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.212478601s Jan 26 13:40:53.649: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225653589s Jan 26 13:40:55.659: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235363286s Jan 26 13:40:57.675: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251453992s Jan 26 13:40:59.684: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.260378789s STEP: Saw pod success Jan 26 13:40:59.684: INFO: Pod "client-envvars-de09f110-4842-4aac-90ec-bd30461e3391" satisfied condition "success or failure" Jan 26 13:40:59.691: INFO: Trying to get logs from node iruya-node pod client-envvars-de09f110-4842-4aac-90ec-bd30461e3391 container env3cont: STEP: delete the pod Jan 26 13:40:59.770: INFO: Waiting for pod client-envvars-de09f110-4842-4aac-90ec-bd30461e3391 to disappear Jan 26 13:40:59.783: INFO: Pod client-envvars-de09f110-4842-4aac-90ec-bd30461e3391 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:40:59.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6257" for this suite. 
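The "environment variables for services" test above checks the variables the kubelet injects for Services that exist before the pod starts: the service name is upper-cased, dashes become underscores, and `_SERVICE_HOST`/`_SERVICE_PORT` suffixes are appended. A small sketch of that naming rule (function name is mine):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvVars builds the environment variable names injected for a
// Service visible to a pod: name upper-cased, dashes replaced with
// underscores, plus the _SERVICE_HOST and _SERVICE_PORT suffixes.
func serviceEnvVars(serviceName string) []string {
	prefix := strings.ReplaceAll(strings.ToUpper(serviceName), "-", "_")
	return []string{prefix + "_SERVICE_HOST", prefix + "_SERVICE_PORT"}
}

func main() {
	for _, v := range serviceEnvVars("fooservice") {
		fmt.Println(v)
	}
}
```

Note the ordering constraint the test depends on: only Services created before the client pod are reflected in its environment, which is why the test waits for the server pod and Service first.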
Jan 26 13:41:51.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:41:52.089: INFO: namespace pods-6257 deletion completed in 52.251391116s • [SLOW TEST:72.922 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:41:52.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:41:59.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6488" for this suite. 
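The Namespaces test above verifies cascade deletion: destroying a namespace removes every Service in it, so a recreated namespace of the same name starts empty. A toy stdlib model of that invariant (types and names are mine, purely illustrative):

```go
package main

import "fmt"

// cluster maps namespace -> service names. Deleting a namespace drops
// all of its services; recreating it yields an empty namespace, which
// is exactly what the test asserts.
type cluster map[string][]string

func (c cluster) deleteNamespace(ns string)   { delete(c, ns) }
func (c cluster) recreateNamespace(ns string) { c[ns] = nil }

func main() {
	c := cluster{"nsdeletetest": {"test-service"}}
	c.deleteNamespace("nsdeletetest")
	c.recreateNamespace("nsdeletetest")
	fmt.Println(len(c["nsdeletetest"])) // no services survive recreation
}
```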
Jan 26 13:42:05.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:42:05.388: INFO: namespace namespaces-6488 deletion completed in 6.167357056s STEP: Destroying namespace "nsdeletetest-6719" for this suite. Jan 26 13:42:05.405: INFO: Namespace nsdeletetest-6719 was already deleted STEP: Destroying namespace "nsdeletetest-3142" for this suite. Jan 26 13:42:11.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:42:11.601: INFO: namespace nsdeletetest-3142 deletion completed in 6.196059996s • [SLOW TEST:19.511 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:42:11.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name 
projected-secret-test-3f29c608-8786-4089-b817-49222e8616bf STEP: Creating a pod to test consume secrets Jan 26 13:42:11.740: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc" in namespace "projected-4283" to be "success or failure" Jan 26 13:42:11.749: INFO: Pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433623ms Jan 26 13:42:13.771: INFO: Pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030507367s Jan 26 13:42:15.789: INFO: Pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048578856s Jan 26 13:42:17.802: INFO: Pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062315963s Jan 26 13:42:19.818: INFO: Pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078051182s STEP: Saw pod success Jan 26 13:42:19.819: INFO: Pod "pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc" satisfied condition "success or failure" Jan 26 13:42:19.832: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc container projected-secret-volume-test: STEP: delete the pod Jan 26 13:42:19.982: INFO: Waiting for pod pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc to disappear Jan 26 13:42:19.995: INFO: Pod pod-projected-secrets-c510cc44-ed1b-4112-b85a-d3606510a1fc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:42:19.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4283" for this suite. 
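A practical note on the `defaultMode` exercised by the projected-secret test above: the field takes the file mode as an integer, and in JSON manifests it must be written in decimal, so octal 0400 appears as 256. A quick conversion helper (name is mine) illustrating the arithmetic:

```go
package main

import (
	"fmt"
	"strconv"
)

// octalToDecimal converts an octal mode string (e.g. "400") to the
// decimal integer a JSON manifest's defaultMode field expects.
func octalToDecimal(octal string) int64 {
	n, err := strconv.ParseInt(octal, 8, 64)
	if err != nil {
		panic(err)
	}
	return n
}

func main() {
	fmt.Println(octalToDecimal("400")) // 256: JSON value for mode 0400
}
```

YAML manifests additionally accept the `0400` octal literal directly, which avoids the conversion.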
Jan 26 13:42:26.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:42:26.187: INFO: namespace projected-4283 deletion completed in 6.181769753s • [SLOW TEST:14.585 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:42:26.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 26 13:42:26.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e" in namespace "downward-api-9827" to be "success or failure" Jan 26 13:42:26.295: INFO: Pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.817196ms Jan 26 13:42:28.305: INFO: Pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021656916s Jan 26 13:42:30.321: INFO: Pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03815722s Jan 26 13:42:32.343: INFO: Pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059344058s Jan 26 13:42:34.349: INFO: Pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065688131s STEP: Saw pod success Jan 26 13:42:34.349: INFO: Pod "downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e" satisfied condition "success or failure" Jan 26 13:42:34.351: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e container client-container: STEP: delete the pod Jan 26 13:42:34.410: INFO: Waiting for pod downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e to disappear Jan 26 13:42:34.478: INFO: Pod downwardapi-volume-770a3656-1a4c-4ad9-bdde-81429f3f596e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:42:34.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9827" for this suite. 
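The "should provide podname only" test above projects a single downward API file (conventionally named `podname`) whose content comes from a `fieldRef` such as `metadata.name`. A sketch of that fieldRef resolution (function name and the limited set of handled paths are mine; the real plugin supports more paths):

```go
package main

import "fmt"

// resolveFieldRef mimics how a downward API volume file is filled
// from a fieldRef against the pod's own metadata. Only two paths are
// modeled here; the real plugin handles several more.
func resolveFieldRef(fieldPath, podName, podNamespace string) (string, bool) {
	switch fieldPath {
	case "metadata.name":
		return podName, true
	case "metadata.namespace":
		return podNamespace, true
	}
	return "", false
}

func main() {
	v, _ := resolveFieldRef("metadata.name", "downwardapi-volume-770a3656", "downward-api-9827")
	fmt.Println(v) // the file's content is just the pod's name
}
```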
Jan 26 13:42:40.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:42:40.692: INFO: namespace downward-api-9827 deletion completed in 6.206489324s • [SLOW TEST:14.505 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:42:40.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-ea50d36d-f216-4d87-9b4a-584c3b9eebd0 STEP: Creating configMap with name cm-test-opt-upd-e90b6cb8-0cda-409d-96ce-964c02e77375 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ea50d36d-f216-4d87-9b4a-584c3b9eebd0 STEP: Updating configmap cm-test-opt-upd-e90b6cb8-0cda-409d-96ce-964c02e77375 STEP: Creating configMap with name cm-test-opt-create-2fadad38-336e-4986-926d-3d374140d224 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:44:21.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5411" for this suite.
Jan 26 13:44:43.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:44:43.417: INFO: namespace projected-5411 deletion completed in 22.266589092s
• [SLOW TEST:122.724 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:44:43.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 13:44:43.546: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c" in namespace "downward-api-5336" to be "success or failure"
Jan 26 13:44:43.555: INFO: Pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95671ms
Jan 26 13:44:45.567: INFO: Pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021466867s
Jan 26 13:44:47.578: INFO: Pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032111404s
Jan 26 13:44:49.587: INFO: Pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041168537s
Jan 26 13:44:51.631: INFO: Pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085644582s
STEP: Saw pod success
Jan 26 13:44:51.631: INFO: Pod "downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c" satisfied condition "success or failure"
Jan 26 13:44:51.641: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c container client-container:
STEP: delete the pod
Jan 26 13:44:51.802: INFO: Waiting for pod downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c to disappear
Jan 26 13:44:51.814: INFO: Pod downwardapi-volume-12be900a-fd8d-49fc-8ac9-cbcb13bd003c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:44:51.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5336" for this suite.
Jan 26 13:44:57.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:44:57.975: INFO: namespace downward-api-5336 deletion completed in 6.150008555s
• [SLOW TEST:14.557 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:44:57.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 13:44:58.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:45:06.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-559" for this suite.
Jan 26 13:45:48.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:45:48.810: INFO: namespace pods-559 deletion completed in 42.212160661s
• [SLOW TEST:50.833 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:45:48.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:45:54.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3983" for this suite.
Jan 26 13:46:00.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:46:00.617: INFO: namespace watch-3983 deletion completed in 6.284148566s
• [SLOW TEST:11.806 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:46:00.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:46:00.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4538" for this suite.
Jan 26 13:46:22.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:46:23.128: INFO: namespace pods-4538 deletion completed in 22.255981197s
• [SLOW TEST:22.510 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:46:23.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ea35e8ed-692b-4ac4-a770-11e40f06a3fb
STEP: Creating a pod to test consume secrets
Jan 26 13:46:24.368: INFO: Waiting up to 5m0s for pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b" in namespace "secrets-3628" to be "success or failure"
Jan 26 13:46:24.502: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 133.868952ms
Jan 26 13:46:26.520: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152188099s
Jan 26 13:46:28.537: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169510421s
Jan 26 13:46:30.553: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184676512s
Jan 26 13:46:32.571: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203410289s
Jan 26 13:46:34.649: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.281167903s
STEP: Saw pod success
Jan 26 13:46:34.649: INFO: Pod "pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b" satisfied condition "success or failure"
Jan 26 13:46:34.655: INFO: Trying to get logs from node iruya-node pod pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b container secret-volume-test:
STEP: delete the pod
Jan 26 13:46:34.901: INFO: Waiting for pod pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b to disappear
Jan 26 13:46:34.914: INFO: Pod pod-secrets-ce97f8ee-f982-4e2d-9e84-dffb7582a02b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 13:46:34.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3628" for this suite.
Jan 26 13:46:40.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 13:46:41.145: INFO: namespace secrets-3628 deletion completed in 6.220469345s
• [SLOW TEST:18.016 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 13:46:41.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5792
I0126 13:46:41.287661 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5792, replica count: 1
I0126 13:46:42.338817 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:43.339185 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:44.339546 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:45.340014 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:46.340834 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:47.341857 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:48.342284 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:49.342837 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0126 13:46:50.343539 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 26 13:46:50.508: INFO: Created: latency-svc-gh5f6
Jan 26 13:46:50.578: INFO: Got endpoints: latency-svc-gh5f6 [133.951021ms]
Jan 26 13:46:50.670: INFO: Created: latency-svc-gbxwv
Jan 26 13:46:50.751: INFO: Got endpoints: latency-svc-gbxwv [172.317183ms]
Jan 26 13:46:50.770: INFO: Created: latency-svc-mmv7l
Jan 26 13:46:50.781: INFO: Got endpoints: latency-svc-mmv7l [202.57525ms]
Jan 26 13:46:50.805: INFO: Created: latency-svc-5x5j8
Jan 26 13:46:50.816: INFO: Got endpoints: latency-svc-5x5j8 [235.862007ms]
Jan 26 13:46:50.963: INFO: Created: latency-svc-xs8s7
Jan 26 13:46:50.968: INFO: Got endpoints: latency-svc-xs8s7 [387.386155ms]
Jan 26 13:46:51.029: INFO: Created: latency-svc-wwpnv
Jan 26 13:46:51.052: INFO: Got endpoints: latency-svc-wwpnv [471.757506ms]
Jan 26 13:46:51.128: INFO: Created: latency-svc-wf22p
Jan 26 13:46:51.137: INFO: Got endpoints: latency-svc-wf22p [556.512195ms]
Jan 26 13:46:51.174: INFO: Created: latency-svc-xdfzz
Jan 26 13:46:51.177: INFO: Got endpoints: latency-svc-xdfzz [596.350751ms]
Jan 26 13:46:51.263: INFO: Created: latency-svc-vdnp5
Jan 26 13:46:51.307: INFO: Got endpoints: latency-svc-vdnp5 [726.191106ms]
Jan 26 13:46:51.312: INFO: Created: latency-svc-rxb2p
Jan 26 13:46:51.317: INFO: Got endpoints: latency-svc-rxb2p [735.779707ms]
Jan 26 13:46:51.355: INFO: Created: latency-svc-dl9hj
Jan 26 13:46:51.361: INFO: Got endpoints: latency-svc-dl9hj [780.103294ms]
Jan 26 13:46:51.460: INFO: Created: latency-svc-smv6b
Jan 26 13:46:51.472: INFO: Got endpoints: latency-svc-smv6b [890.800313ms]
Jan 26 13:46:51.505: INFO: Created: latency-svc-pv8zs
Jan 26 13:46:51.519: INFO: Got endpoints: latency-svc-pv8zs [939.834546ms]
Jan 26 13:46:51.637: INFO: Created: latency-svc-7zvv7
Jan 26 13:46:51.669: INFO: Got endpoints: latency-svc-7zvv7 [1.089480857s]
Jan 26 13:46:51.697: INFO: Created: latency-svc-pgcss
Jan 26 13:46:51.705: INFO: Got endpoints: latency-svc-pgcss [1.124653245s]
Jan 26 13:46:51.866: INFO: Created: latency-svc-zrn8f
Jan 26 13:46:51.889: INFO: Got endpoints: latency-svc-zrn8f [1.307966621s]
Jan 26 13:46:51.941: INFO: Created: latency-svc-q7jgl
Jan 26 13:46:51.945: INFO: Got endpoints: latency-svc-q7jgl [1.193761131s]
Jan 26 13:46:52.076: INFO: Created: latency-svc-q548g
Jan 26 13:46:52.085: INFO: Got endpoints: latency-svc-q548g [1.303431572s]
Jan 26 13:46:52.275: INFO: Created: latency-svc-kx8q7
Jan 26 13:46:52.278: INFO: Got endpoints: latency-svc-kx8q7 [1.460989857s]
Jan 26 13:46:52.354: INFO: Created: latency-svc-9ckl5
Jan 26 13:46:52.361: INFO: Got endpoints: latency-svc-9ckl5 [1.39283837s]
Jan 26 13:46:52.477: INFO: Created: latency-svc-xzcn6
Jan 26 13:46:52.488: INFO: Got endpoints: latency-svc-xzcn6 [1.435823601s]
Jan 26 13:46:52.534: INFO: Created: latency-svc-7g4fh
Jan 26 13:46:52.557: INFO: Got endpoints: latency-svc-7g4fh [1.420051136s]
Jan 26 13:46:52.680: INFO: Created: latency-svc-gc65g
Jan 26 13:46:52.685: INFO: Got endpoints: latency-svc-gc65g [1.507394803s]
Jan 26 13:46:53.014: INFO: Created: latency-svc-gmqx4
Jan 26 13:46:53.025: INFO: Got endpoints: latency-svc-gmqx4 [1.717766575s]
Jan 26 13:46:53.075: INFO: Created: latency-svc-fx8hp
Jan 26 13:46:53.091: INFO: Got endpoints: latency-svc-fx8hp [1.774523772s]
Jan 26 13:46:53.199: INFO: Created: latency-svc-vvkzp
Jan 26 13:46:53.206: INFO: Got endpoints: latency-svc-vvkzp [1.844886363s]
Jan 26 13:46:53.254: INFO: Created: latency-svc-5b6d4
Jan 26 13:46:53.264: INFO: Got endpoints: latency-svc-5b6d4 [1.792180709s]
Jan 26 13:46:53.375: INFO: Created: latency-svc-r4c22
Jan 26 13:46:53.375: INFO: Got endpoints: latency-svc-r4c22 [1.855697647s]
Jan 26 13:46:53.403: INFO: Created: latency-svc-6zzbx
Jan 26 13:46:53.411: INFO: Got endpoints: latency-svc-6zzbx [1.741950834s]
Jan 26 13:46:53.495: INFO: Created: latency-svc-vc4qj
Jan 26 13:46:53.501: INFO: Got endpoints: latency-svc-vc4qj [1.795672744s]
Jan 26 13:46:53.557: INFO: Created: latency-svc-tjqq5
Jan 26 13:46:53.569: INFO: Got endpoints: latency-svc-tjqq5 [1.679722486s]
Jan 26 13:46:53.686: INFO: Created: latency-svc-rlwnc
Jan 26 13:46:53.691: INFO: Got endpoints: latency-svc-rlwnc [1.74563146s]
Jan 26 13:46:53.743: INFO: Created: latency-svc-rphrf
Jan 26 13:46:53.748: INFO: Got endpoints: latency-svc-rphrf [1.662991111s]
Jan 26 13:46:53.857: INFO: Created: latency-svc-wfvfk
Jan 26 13:46:53.879: INFO: Got endpoints: latency-svc-wfvfk [1.600950043s]
Jan 26 13:46:53.936: INFO: Created: latency-svc-kcdws
Jan 26 13:46:54.030: INFO: Created: latency-svc-tlzlx
Jan 26 13:46:54.030: INFO: Got endpoints: latency-svc-kcdws [1.66929849s]
Jan 26 13:46:54.041: INFO: Got endpoints: latency-svc-tlzlx [1.552778678s]
Jan 26 13:46:54.088: INFO: Created: latency-svc-8jlzn
Jan 26 13:46:54.096: INFO: Got endpoints: latency-svc-8jlzn [1.537782776s]
Jan 26 13:46:54.194: INFO: Created: latency-svc-64jk5
Jan 26 13:46:54.209: INFO: Got endpoints: latency-svc-64jk5 [1.523800081s]
Jan 26 13:46:54.276: INFO: Created: latency-svc-wsst7
Jan 26 13:46:54.285: INFO: Got endpoints: latency-svc-wsst7 [1.259874407s]
Jan 26 13:46:54.371: INFO: Created: latency-svc-98gkq
Jan 26 13:46:54.381: INFO: Got endpoints: latency-svc-98gkq [1.289668069s]
Jan 26 13:46:54.419: INFO: Created: latency-svc-ld579
Jan 26 13:46:54.427: INFO: Got endpoints: latency-svc-ld579 [1.221140958s]
Jan 26 13:46:54.511: INFO: Created: latency-svc-788dh
Jan 26 13:46:54.523: INFO: Got endpoints: latency-svc-788dh [141.168288ms]
Jan 26 13:46:54.586: INFO: Created: latency-svc-8c8nl
Jan 26 13:46:54.669: INFO: Created: latency-svc-9b56w
Jan 26 13:46:54.669: INFO: Got endpoints: latency-svc-8c8nl [1.405268645s]
Jan 26 13:46:54.677: INFO: Got endpoints: latency-svc-9b56w [1.301966698s]
Jan 26 13:46:54.730: INFO: Created: latency-svc-2jdmq
Jan 26 13:46:55.322: INFO: Got endpoints: latency-svc-2jdmq [1.911198904s]
Jan 26 13:46:55.337: INFO: Created: latency-svc-bj4qm
Jan 26 13:46:55.353: INFO: Got endpoints: latency-svc-bj4qm [1.852275586s]
Jan 26 13:46:55.419: INFO: Created: latency-svc-f7mv6
Jan 26 13:46:55.468: INFO: Got endpoints: latency-svc-f7mv6 [1.898749212s]
Jan 26 13:46:55.494: INFO: Created: latency-svc-z7sv5
Jan 26 13:46:55.505: INFO: Got endpoints: latency-svc-z7sv5 [1.81405099s]
Jan 26 13:46:55.548: INFO: Created: latency-svc-bfgmr
Jan 26 13:46:55.627: INFO: Got endpoints: latency-svc-bfgmr [1.878242684s]
Jan 26 13:46:55.657: INFO: Created: latency-svc-psbgj
Jan 26 13:46:55.668: INFO: Got endpoints: latency-svc-psbgj [1.788662692s]
Jan 26 13:46:55.713: INFO: Created: latency-svc-vwl27
Jan 26 13:46:55.724: INFO: Got endpoints: latency-svc-vwl27 [1.693600556s]
Jan 26 13:46:55.838: INFO: Created: latency-svc-pppqr
Jan 26 13:46:55.845: INFO: Got endpoints: latency-svc-pppqr [1.803103213s]
Jan 26 13:46:55.892: INFO: Created: latency-svc-4snd7
Jan 26 13:46:55.904: INFO: Got endpoints: latency-svc-4snd7 [1.807610386s]
Jan 26 13:46:56.050: INFO: Created: latency-svc-468f5
Jan 26 13:46:56.067: INFO: Got endpoints: latency-svc-468f5 [1.857324352s]
Jan 26 13:46:56.141: INFO: Created: latency-svc-hmx9f
Jan 26 13:46:56.223: INFO: Created: latency-svc-4smk4
Jan 26 13:46:56.223: INFO: Got endpoints: latency-svc-hmx9f [1.938002308s]
Jan 26 13:46:56.243: INFO: Got endpoints: latency-svc-4smk4 [1.815346358s]
Jan 26 13:46:56.330: INFO: Created: latency-svc-gwp8g
Jan 26 13:46:56.409: INFO: Got endpoints: latency-svc-gwp8g [1.885616713s]
Jan 26 13:46:56.431: INFO: Created: latency-svc-bq2mz
Jan 26 13:46:56.447: INFO: Got endpoints: latency-svc-bq2mz [1.777498186s]
Jan 26 13:46:56.478: INFO: Created: latency-svc-vgctj
Jan 26 13:46:56.490: INFO: Got endpoints: latency-svc-vgctj [1.813186768s]
Jan 26 13:46:56.578: INFO: Created: latency-svc-wzmql
Jan 26 13:46:56.626: INFO: Got endpoints: latency-svc-wzmql [1.303121153s]
Jan 26 13:46:56.627: INFO: Created: latency-svc-j42ns
Jan 26 13:46:56.643: INFO: Got endpoints: latency-svc-j42ns [1.289397634s]
Jan 26 13:46:56.751: INFO: Created: latency-svc-h5k46
Jan 26 13:46:56.764: INFO: Got endpoints: latency-svc-h5k46 [1.295096185s]
Jan 26 13:46:56.814: INFO: Created: latency-svc-496qz
Jan 26 13:46:56.819: INFO: Got endpoints: latency-svc-496qz [1.313487414s]
Jan 26 13:46:56.906: INFO: Created: latency-svc-8mhbn
Jan 26 13:46:56.954: INFO: Got endpoints: latency-svc-8mhbn [1.3269174s]
Jan 26 13:46:56.957: INFO: Created: latency-svc-8mqx2
Jan 26 13:46:56.981: INFO: Got endpoints: latency-svc-8mqx2 [1.31278697s]
Jan 26 13:46:57.094: INFO: Created: latency-svc-hp55q
Jan 26 13:46:57.164: INFO: Got endpoints: latency-svc-hp55q [1.440052243s]
Jan 26 13:46:57.168: INFO: Created: latency-svc-5b6xh
Jan 26 13:46:57.180: INFO: Got endpoints: latency-svc-5b6xh [1.335420002s]
Jan 26 13:46:57.257: INFO: Created: latency-svc-fmf78
Jan 26 13:46:57.266: INFO: Got endpoints: latency-svc-fmf78 [1.361802996s]
Jan 26 13:46:57.321: INFO: Created: latency-svc-tpl8f
Jan 26 13:46:57.324: INFO: Got endpoints: latency-svc-tpl8f [1.257421012s]
Jan 26 13:46:57.438: INFO: Created: latency-svc-qsdds
Jan 26 13:46:57.447: INFO: Got endpoints: latency-svc-qsdds [1.224011123s]
Jan 26 13:46:57.491: INFO: Created: latency-svc-bxfpw
Jan 26 13:46:57.502: INFO: Got endpoints: latency-svc-bxfpw [1.258878631s]
Jan 26 13:46:57.578: INFO: Created: latency-svc-2r47j
Jan 26 13:46:57.589: INFO: Got endpoints: latency-svc-2r47j [1.179206913s]
Jan 26 13:46:57.647: INFO: Created: latency-svc-ttgzr
Jan 26 13:46:57.665: INFO: Got endpoints: latency-svc-ttgzr [1.217542136s]
Jan 26 13:46:57.793: INFO: Created: latency-svc-prj97
Jan 26 13:46:57.812: INFO: Got endpoints: latency-svc-prj97 [1.320738657s]
Jan 26 13:46:57.936: INFO: Created: latency-svc-xk8c4
Jan 26 13:46:58.017: INFO: Got endpoints: latency-svc-xk8c4 [1.391372395s]
Jan 26 13:46:58.022: INFO: Created: latency-svc-s2wcm
Jan 26 13:46:58.114: INFO: Got endpoints: latency-svc-s2wcm [1.471007296s]
Jan 26 13:46:58.126: INFO: Created: latency-svc-nf4bs
Jan 26 13:46:58.129: INFO: Got endpoints: latency-svc-nf4bs [1.365222145s]
Jan 26 13:46:58.185: INFO: Created: latency-svc-68pfd
Jan 26 13:46:58.185: INFO: Got endpoints: latency-svc-68pfd [1.365760011s]
Jan 26 13:46:58.262: INFO: Created: latency-svc-ztr9g
Jan 26 13:46:58.271: INFO: Got endpoints: latency-svc-ztr9g [1.316343658s]
Jan 26 13:46:58.317: INFO: Created: latency-svc-l8ldd
Jan 26 13:46:58.468: INFO: Got endpoints: latency-svc-l8ldd [1.487242594s]
Jan 26 13:46:58.469: INFO: Created: latency-svc-rgq4q
Jan 26 13:46:58.481: INFO: Got endpoints: latency-svc-rgq4q [1.316404616s]
Jan 26 13:46:58.519: INFO: Created: latency-svc-l9hlp
Jan 26 13:46:58.524: INFO: Got endpoints: latency-svc-l9hlp [1.343216041s]
Jan 26 13:46:58.622: INFO: Created: latency-svc-n88v5
Jan 26 13:46:58.638: INFO: Got endpoints: latency-svc-n88v5 [1.371496829s]
Jan 26 13:46:58.850: INFO: Created: latency-svc-jcwhr
Jan 26 13:46:58.877: INFO: Got endpoints: latency-svc-jcwhr [1.552498953s]
Jan 26 13:46:58.933: INFO: Created: latency-svc-k5trj
Jan 26 13:46:59.034: INFO: Got endpoints: latency-svc-k5trj [1.586980354s]
Jan 26 13:46:59.085: INFO: Created: latency-svc-n8fm2
Jan 26 13:46:59.090: INFO: Got endpoints: latency-svc-n8fm2 [1.587994804s]
Jan 26 13:46:59.241: INFO: Created: latency-svc-958m6
Jan 26 13:46:59.251: INFO: Got endpoints: latency-svc-958m6 [1.662429444s]
Jan 26 13:46:59.297: INFO: Created: latency-svc-hfqdd
Jan 26 13:46:59.304: INFO: Got endpoints: latency-svc-hfqdd [1.636065825s]
Jan 26 13:46:59.421: INFO: Created: latency-svc-lhlx4
Jan 26 13:46:59.440: INFO: Got endpoints: latency-svc-lhlx4 [1.627652224s]
Jan 26 13:46:59.684: INFO: Created: latency-svc-6v6vs
Jan 26 13:46:59.690: INFO: Got endpoints: latency-svc-6v6vs [1.672798643s]
Jan 26 13:46:59.753: INFO: Created: latency-svc-hhxnd
Jan 26 13:46:59.766: INFO: Got endpoints: latency-svc-hhxnd [1.651705235s]
Jan 26 13:46:59.969: INFO: Created: latency-svc-hlzmd
Jan 26 13:46:59.984: INFO: Got endpoints: latency-svc-hlzmd [1.85467755s]
Jan 26 13:47:00.229: INFO: Created: latency-svc-z469w
Jan 26 13:47:00.298: INFO: Created: latency-svc-bxc8m
Jan 26 13:47:00.298: INFO: Got endpoints: latency-svc-z469w [2.113213728s]
Jan 26 13:47:00.316: INFO: Got endpoints: latency-svc-bxc8m [2.044922374s]
Jan 26 13:47:00.486: INFO: Created: latency-svc-nhglg
Jan 26 13:47:00.501: INFO: Got endpoints: latency-svc-nhglg [2.031667066s]
Jan 26 13:47:00.801: INFO: Created: latency-svc-c54pw
Jan 26 13:47:00.826: INFO: Got endpoints: latency-svc-c54pw [2.34527329s]
Jan 26 13:47:01.030: INFO: Created: latency-svc-4bjbg
Jan 26 13:47:01.033: INFO: Got endpoints: latency-svc-4bjbg [2.508717463s]
Jan 26 13:47:01.098: INFO: Created: latency-svc-bxf5p
Jan 26 13:47:01.227: INFO: Got endpoints: latency-svc-bxf5p [2.588940167s]
Jan 26 13:47:01.236: INFO: Created: latency-svc-7wdxp
Jan 26 13:47:01.247: INFO: Got endpoints: latency-svc-7wdxp [2.369629723s]
Jan 26 13:47:01.283: INFO: Created: latency-svc-lkd8w
Jan 26 13:47:01.317: INFO: Got endpoints: latency-svc-lkd8w [2.28209252s]
Jan 26 13:47:01.417: INFO: Created: latency-svc-mlk4n
Jan 26 13:47:01.461: INFO: Got endpoints: latency-svc-mlk4n [2.37095811s]
Jan 26 13:47:01.654: INFO: Created: latency-svc-8g9hn
Jan 26 13:47:01.654: INFO: Got endpoints: latency-svc-8g9hn [2.402534295s]
Jan 26 13:47:01.869: INFO: Created: latency-svc-6plw7
Jan 26 13:47:01.946: INFO: Got endpoints: latency-svc-6plw7 [2.641808106s]
Jan 26 13:47:01.951: INFO: Created: latency-svc-6qlkf
Jan 26 13:47:01.986: INFO: Got endpoints: latency-svc-6qlkf [2.545512257s]
Jan 26 13:47:02.028: INFO: Created: latency-svc-6wp94
Jan 26 13:47:02.066: INFO: Got endpoints: latency-svc-6wp94 [2.375344575s]
Jan 26 13:47:02.071: INFO: Created: latency-svc-sfmbc
Jan 26 13:47:02.216: INFO: Got endpoints: latency-svc-sfmbc [2.449853685s]
Jan 26 13:47:02.230: INFO: Created: latency-svc-lcdcz
Jan 26 13:47:02.239: INFO: Got endpoints: latency-svc-lcdcz [2.254184971s]
Jan 26 13:47:02.304: INFO: Created: latency-svc-d579j
Jan 26 13:47:02.314: INFO: Got endpoints: latency-svc-d579j [2.015382831s]
Jan 26 13:47:02.450: INFO: Created: latency-svc-85j7l
Jan 26 13:47:02.469: INFO: Got endpoints: latency-svc-85j7l [2.153181699s]
Jan 26 13:47:02.512: INFO: Created: latency-svc-6lfbb
Jan 26 13:47:02.620: INFO: Got endpoints: latency-svc-6lfbb [2.1193036s]
Jan 26 13:47:02.637: INFO: Created: latency-svc-fgjwt
Jan 26 13:47:02.637: INFO: Got endpoints: latency-svc-fgjwt [1.810096524s]
Jan 26 13:47:02.682: INFO: Created: latency-svc-v48wh
Jan 26 13:47:02.690: INFO: Got endpoints: latency-svc-v48wh [1.656928791s]
Jan 26 13:47:02.792: INFO: Created: latency-svc-vcmdj
Jan 26 13:47:02.836: INFO: Got endpoints: latency-svc-vcmdj [1.609199542s]
Jan 26 13:47:02.859: INFO: Created: latency-svc-tsksq
Jan 26 13:47:02.865: INFO: Got endpoints: latency-svc-tsksq [1.618275072s]
Jan 26 13:47:02.982: INFO: Created: latency-svc-774mk
Jan 26 13:47:02.998: INFO: Got endpoints: latency-svc-774mk [1.681042168s]
Jan 26 13:47:03.067: INFO: Created: latency-svc-8z98h
Jan 26 13:47:03.072: INFO: Got endpoints: latency-svc-8z98h [1.61044062s]
Jan 26 13:47:03.194: INFO: Created: latency-svc-bgz9p
Jan 26 13:47:03.197: INFO: Got endpoints: latency-svc-bgz9p [1.543208124s]
Jan 26 13:47:03.422: INFO: Created: latency-svc-bcjz9
Jan 26 13:47:03.425: INFO: Got endpoints: latency-svc-bcjz9 [1.478816717s]
Jan 26 13:47:03.487: INFO: Created: latency-svc-rwfss
Jan 26 13:47:03.487: INFO: Got endpoints: latency-svc-rwfss [1.500725908s]
Jan 26 13:47:03.591: INFO: Created: latency-svc-vc87j
Jan 26 13:47:03.601: INFO: Got endpoints: latency-svc-vc87j [1.53410647s]
Jan 26 13:47:03.651: INFO: Created: latency-svc-nvbms
Jan 26 13:47:03.664: INFO: Got endpoints: latency-svc-nvbms [1.447821468s]
Jan 26 13:47:03.778: INFO: Created: latency-svc-5xbjv
Jan 26 13:47:03.790: INFO: Got endpoints: latency-svc-5xbjv [1.550684108s]
Jan 26 13:47:03.858: INFO: Created: latency-svc-jdspg
Jan 26 13:47:03.955: INFO: Got endpoints: latency-svc-jdspg [1.640831936s]
Jan 26 13:47:03.967: INFO: Created: latency-svc-txqvj
Jan 26 13:47:03.989: INFO: Got endpoints: latency-svc-txqvj [1.51985622s]
Jan 26 13:47:04.116: INFO: Created: latency-svc-8clq2
Jan 26 13:47:04.120: INFO: Got endpoints: latency-svc-8clq2 [1.499046493s]
Jan 26 13:47:04.176: INFO: Created: latency-svc-q7b5g
Jan 26 13:47:04.176: INFO: Got endpoints: latency-svc-q7b5g [1.539036624s]
Jan 26 13:47:04.280: INFO: Created: latency-svc-7nz5w
Jan 26 13:47:04.324: INFO: Created: latency-svc-wzh9x
Jan 26 13:47:04.324: INFO: Got endpoints: latency-svc-7nz5w [1.633679093s]
Jan 26 13:47:04.361: INFO: Got endpoints: latency-svc-wzh9x [1.523604089s]
Jan 26 13:47:04.365: INFO: Created: latency-svc-ll5h4
Jan 26 13:47:04.372: INFO: Got endpoints: latency-svc-ll5h4 [1.506597149s]
Jan 26 13:47:04.470: INFO: Created: latency-svc-gz5tv
Jan 26 13:47:04.525: INFO: Got endpoints: latency-svc-gz5tv [1.52611841s]
Jan 26 13:47:04.525: INFO: Created: latency-svc-g6q6j
Jan 26 13:47:04.532: INFO: Got endpoints: latency-svc-g6q6j [1.45941158s]
Jan 26 13:47:04.654: INFO: Created: latency-svc-rbsnv
Jan 26 13:47:04.661: INFO: Got endpoints: latency-svc-rbsnv [1.462977639s]
Jan 26 13:47:04.701: INFO: Created: latency-svc-w2gfz
Jan 26 13:47:04.712: INFO: Got endpoints: latency-svc-w2gfz [1.286517206s]
Jan 26 13:47:04.745: INFO: Created: latency-svc-bjwx9
Jan 26 13:47:04.750: INFO: Got endpoints: latency-svc-bjwx9 [1.262948906s]
Jan 26 13:47:04.854: INFO: Created: latency-svc-pnk7v
Jan 26 13:47:04.892: INFO: Created: latency-svc-6vwb8
Jan 26 13:47:04.892: INFO: Got endpoints: latency-svc-pnk7v [1.29114646s]
Jan 26 13:47:04.902: INFO: Got endpoints: latency-svc-6vwb8 [1.237052307s]
Jan 26 13:47:04.966: INFO: Created: latency-svc-lx4cb
Jan 26 13:47:05.047: INFO: Got endpoints: latency-svc-lx4cb [1.256337963s]
Jan 26 13:47:05.059: INFO: Created: latency-svc-rx898
Jan 26 13:47:05.075: INFO: Got endpoints: latency-svc-rx898 [1.120417453s]
Jan 26 13:47:05.154: INFO: Created: latency-svc-9t7lg
Jan 26 13:47:05.289: INFO: Got endpoints: latency-svc-9t7lg [1.299454845s]
Jan 26 13:47:05.302: INFO: Created: latency-svc-ckxtp
Jan 26 13:47:05.308: INFO: Got endpoints: latency-svc-ckxtp [1.188627464s]
Jan 26 13:47:05.375: INFO: Created: latency-svc-prrjs
Jan 26 13:47:05.382: INFO: Got endpoints: latency-svc-prrjs [1.205199721s]
Jan 26 13:47:05.884: INFO: Created: latency-svc-jjcvk
Jan 26 13:47:05.909: INFO: Got endpoints: latency-svc-jjcvk [1.584699454s]
Jan 26 13:47:06.039: INFO: Created: latency-svc-qwzqf
Jan 26 13:47:06.046: INFO: Got endpoints: latency-svc-qwzqf [1.68536948s]
Jan 26 13:47:06.093: INFO: Created: latency-svc-kdw7z
Jan 26 13:47:06.095: INFO: Got endpoints: latency-svc-kdw7z [1.723071051s]
Jan 26 13:47:06.226: INFO: Created: latency-svc-cmtkf
Jan 26 13:47:06.234: INFO: Got endpoints: latency-svc-cmtkf [1.708765035s]
Jan 26 13:47:06.274: INFO: Created: latency-svc-29p5w
Jan 26 13:47:06.297: INFO: Got endpoints: latency-svc-29p5w [1.764912681s]
Jan 26 13:47:06.336: INFO: Created: latency-svc-l6bmt
Jan 26 13:47:06.384: INFO: Got endpoints: latency-svc-l6bmt [1.723345682s]
Jan 26 13:47:06.411: INFO: Created: latency-svc-27zvh
Jan 26 13:47:06.414: INFO: Got endpoints: latency-svc-27zvh [1.701340824s]
Jan 26 13:47:06.452: INFO: Created: latency-svc-c5c94
Jan 26 13:47:06.467: INFO: Got endpoints: latency-svc-c5c94 [1.716703609s]
Jan 26 13:47:06.543: INFO: Created: latency-svc-s9fjf
Jan 26 13:47:06.559: INFO: Got endpoints: latency-svc-s9fjf [1.666663645s]
Jan 26 13:47:06.613: INFO: Created: latency-svc-lxx9h
Jan 26 13:47:06.622: INFO: Got endpoints: latency-svc-lxx9h [1.720356377s]
Jan 26 13:47:06.739: INFO: Created: latency-svc-4wg4j
Jan 26 13:47:06.743: INFO: Got endpoints: latency-svc-4wg4j [1.695087009s]
Jan 26 13:47:06.777: INFO: Created: latency-svc-p7ctb
Jan 26 13:47:06.789: INFO: Got endpoints: latency-svc-p7ctb [1.712891471s]
Jan 26 13:47:06.824: INFO: Created: latency-svc-kj9x2
Jan 26 13:47:06.929: INFO: Got endpoints: latency-svc-kj9x2 [1.639734107s]
Jan 26 13:47:06.930: INFO: Created: latency-svc-zgbc4
Jan 26 13:47:06.934: INFO: Got endpoints: latency-svc-zgbc4 [1.625272493s]
Jan 26 13:47:06.969: INFO: Created: latency-svc-8q7k5
Jan 26 13:47:06.971: INFO: Got endpoints: latency-svc-8q7k5 [1.589238685s]
Jan 26 13:47:07.129: INFO: Created: latency-svc-k26q5
Jan 26 13:47:07.137: INFO: Got endpoints: latency-svc-k26q5 [1.228496883s]
Jan 26 13:47:07.203: INFO: Created: latency-svc-cmc7p
Jan 26 13:47:07.213: INFO: Got endpoints: latency-svc-cmc7p [1.166017133s]
Jan 26 13:47:07.384: INFO: Created: latency-svc-g94lx
Jan 26 13:47:07.399: INFO: Got endpoints: latency-svc-g94lx [1.303276311s]
Jan 26 13:47:07.450: INFO: Created: latency-svc-zphtk
Jan 26 13:47:07.470: INFO: Got endpoints: latency-svc-zphtk [1.236287323s]
Jan 26 13:47:07.611: INFO: Created: latency-svc-vxqg9
Jan 26 13:47:07.622: INFO: Got endpoints: latency-svc-vxqg9 [1.324440731s]
Jan 26 13:47:07.816: INFO: Created: latency-svc-q5wlh
Jan 26 13:47:07.852: INFO: Got
endpoints: latency-svc-q5wlh [1.46726085s] Jan 26 13:47:07.887: INFO: Created: latency-svc-swblb Jan 26 13:47:07.903: INFO: Got endpoints: latency-svc-swblb [1.488801437s] Jan 26 13:47:08.068: INFO: Created: latency-svc-l84zn Jan 26 13:47:08.071: INFO: Got endpoints: latency-svc-l84zn [1.604034174s] Jan 26 13:47:08.249: INFO: Created: latency-svc-9b2jd Jan 26 13:47:08.282: INFO: Got endpoints: latency-svc-9b2jd [1.723217478s] Jan 26 13:47:08.288: INFO: Created: latency-svc-sbg2z Jan 26 13:47:08.291: INFO: Got endpoints: latency-svc-sbg2z [1.668005881s] Jan 26 13:47:08.398: INFO: Created: latency-svc-492kk Jan 26 13:47:08.402: INFO: Got endpoints: latency-svc-492kk [1.658994688s] Jan 26 13:47:08.451: INFO: Created: latency-svc-4l6dl Jan 26 13:47:08.451: INFO: Got endpoints: latency-svc-4l6dl [1.661973885s] Jan 26 13:47:08.565: INFO: Created: latency-svc-89vkp Jan 26 13:47:08.577: INFO: Got endpoints: latency-svc-89vkp [1.646726123s] Jan 26 13:47:08.617: INFO: Created: latency-svc-s5nlz Jan 26 13:47:08.626: INFO: Got endpoints: latency-svc-s5nlz [1.692470022s] Jan 26 13:47:08.761: INFO: Created: latency-svc-h5khz Jan 26 13:47:08.774: INFO: Got endpoints: latency-svc-h5khz [1.802182217s] Jan 26 13:47:08.840: INFO: Created: latency-svc-bwwb6 Jan 26 13:47:08.891: INFO: Got endpoints: latency-svc-bwwb6 [1.75332344s] Jan 26 13:47:08.933: INFO: Created: latency-svc-kpbdn Jan 26 13:47:08.980: INFO: Created: latency-svc-c8jb6 Jan 26 13:47:08.981: INFO: Got endpoints: latency-svc-kpbdn [1.768249541s] Jan 26 13:47:08.996: INFO: Got endpoints: latency-svc-c8jb6 [1.597097618s] Jan 26 13:47:09.093: INFO: Created: latency-svc-6nt5b Jan 26 13:47:09.102: INFO: Got endpoints: latency-svc-6nt5b [1.631252568s] Jan 26 13:47:09.249: INFO: Created: latency-svc-sgntt Jan 26 13:47:09.267: INFO: Got endpoints: latency-svc-sgntt [1.644988553s] Jan 26 13:47:09.345: INFO: Created: latency-svc-c9xmb Jan 26 13:47:09.455: INFO: Got endpoints: latency-svc-c9xmb [1.603135747s] Jan 26 13:47:09.460: 
INFO: Created: latency-svc-xdjfs Jan 26 13:47:09.472: INFO: Got endpoints: latency-svc-xdjfs [1.569669584s] Jan 26 13:47:09.633: INFO: Created: latency-svc-pjfqh Jan 26 13:47:09.645: INFO: Got endpoints: latency-svc-pjfqh [1.573612558s] Jan 26 13:47:09.813: INFO: Created: latency-svc-b9tcp Jan 26 13:47:09.818: INFO: Got endpoints: latency-svc-b9tcp [1.535736241s] Jan 26 13:47:09.913: INFO: Created: latency-svc-ksszc Jan 26 13:47:10.001: INFO: Got endpoints: latency-svc-ksszc [1.710342441s] Jan 26 13:47:10.060: INFO: Created: latency-svc-twh4l Jan 26 13:47:10.095: INFO: Got endpoints: latency-svc-twh4l [1.693590085s] Jan 26 13:47:10.099: INFO: Created: latency-svc-htqgt Jan 26 13:47:10.160: INFO: Got endpoints: latency-svc-htqgt [1.70867242s] Jan 26 13:47:10.195: INFO: Created: latency-svc-l8gc7 Jan 26 13:47:10.217: INFO: Got endpoints: latency-svc-l8gc7 [1.639969763s] Jan 26 13:47:10.243: INFO: Created: latency-svc-gdwbr Jan 26 13:47:10.310: INFO: Got endpoints: latency-svc-gdwbr [1.68312564s] Jan 26 13:47:10.343: INFO: Created: latency-svc-zzfn9 Jan 26 13:47:10.358: INFO: Got endpoints: latency-svc-zzfn9 [1.58451541s] Jan 26 13:47:10.393: INFO: Created: latency-svc-bg6k2 Jan 26 13:47:10.458: INFO: Got endpoints: latency-svc-bg6k2 [1.566647332s] Jan 26 13:47:10.504: INFO: Created: latency-svc-7825g Jan 26 13:47:10.517: INFO: Got endpoints: latency-svc-7825g [1.535953039s] Jan 26 13:47:10.560: INFO: Created: latency-svc-qj5vj Jan 26 13:47:10.633: INFO: Got endpoints: latency-svc-qj5vj [1.63635363s] Jan 26 13:47:10.679: INFO: Created: latency-svc-hwkbk Jan 26 13:47:10.696: INFO: Got endpoints: latency-svc-hwkbk [1.594591327s] Jan 26 13:47:10.865: INFO: Created: latency-svc-r96js Jan 26 13:47:10.868: INFO: Got endpoints: latency-svc-r96js [1.600819498s] Jan 26 13:47:11.120: INFO: Created: latency-svc-gl5jx Jan 26 13:47:11.139: INFO: Got endpoints: latency-svc-gl5jx [1.682941266s] Jan 26 13:47:11.212: INFO: Created: latency-svc-bncs8 Jan 26 13:47:11.436: INFO: Created: 
latency-svc-jc5ph Jan 26 13:47:11.437: INFO: Got endpoints: latency-svc-bncs8 [1.963617709s] Jan 26 13:47:11.593: INFO: Got endpoints: latency-svc-jc5ph [1.948465501s] Jan 26 13:47:11.604: INFO: Created: latency-svc-wfmjf Jan 26 13:47:11.634: INFO: Got endpoints: latency-svc-wfmjf [1.815546533s] Jan 26 13:47:11.798: INFO: Created: latency-svc-szjp2 Jan 26 13:47:11.815: INFO: Got endpoints: latency-svc-szjp2 [1.81362317s] Jan 26 13:47:11.892: INFO: Created: latency-svc-fm58c Jan 26 13:47:11.955: INFO: Got endpoints: latency-svc-fm58c [1.859668206s] Jan 26 13:47:11.972: INFO: Created: latency-svc-l8nd6 Jan 26 13:47:11.977: INFO: Got endpoints: latency-svc-l8nd6 [1.817009431s] Jan 26 13:47:12.013: INFO: Created: latency-svc-4f5h4 Jan 26 13:47:12.107: INFO: Created: latency-svc-nnn28 Jan 26 13:47:12.107: INFO: Got endpoints: latency-svc-4f5h4 [1.890224076s] Jan 26 13:47:12.127: INFO: Got endpoints: latency-svc-nnn28 [1.816928943s] Jan 26 13:47:12.164: INFO: Created: latency-svc-7zljj Jan 26 13:47:12.182: INFO: Got endpoints: latency-svc-7zljj [1.823034178s] Jan 26 13:47:12.182: INFO: Latencies: [141.168288ms 172.317183ms 202.57525ms 235.862007ms 387.386155ms 471.757506ms 556.512195ms 596.350751ms 726.191106ms 735.779707ms 780.103294ms 890.800313ms 939.834546ms 1.089480857s 1.120417453s 1.124653245s 1.166017133s 1.179206913s 1.188627464s 1.193761131s 1.205199721s 1.217542136s 1.221140958s 1.224011123s 1.228496883s 1.236287323s 1.237052307s 1.256337963s 1.257421012s 1.258878631s 1.259874407s 1.262948906s 1.286517206s 1.289397634s 1.289668069s 1.29114646s 1.295096185s 1.299454845s 1.301966698s 1.303121153s 1.303276311s 1.303431572s 1.307966621s 1.31278697s 1.313487414s 1.316343658s 1.316404616s 1.320738657s 1.324440731s 1.3269174s 1.335420002s 1.343216041s 1.361802996s 1.365222145s 1.365760011s 1.371496829s 1.391372395s 1.39283837s 1.405268645s 1.420051136s 1.435823601s 1.440052243s 1.447821468s 1.45941158s 1.460989857s 1.462977639s 1.46726085s 1.471007296s 1.478816717s 
1.487242594s 1.488801437s 1.499046493s 1.500725908s 1.506597149s 1.507394803s 1.51985622s 1.523604089s 1.523800081s 1.52611841s 1.53410647s 1.535736241s 1.535953039s 1.537782776s 1.539036624s 1.543208124s 1.550684108s 1.552498953s 1.552778678s 1.566647332s 1.569669584s 1.573612558s 1.58451541s 1.584699454s 1.586980354s 1.587994804s 1.589238685s 1.594591327s 1.597097618s 1.600819498s 1.600950043s 1.603135747s 1.604034174s 1.609199542s 1.61044062s 1.618275072s 1.625272493s 1.627652224s 1.631252568s 1.633679093s 1.636065825s 1.63635363s 1.639734107s 1.639969763s 1.640831936s 1.644988553s 1.646726123s 1.651705235s 1.656928791s 1.658994688s 1.661973885s 1.662429444s 1.662991111s 1.666663645s 1.668005881s 1.66929849s 1.672798643s 1.679722486s 1.681042168s 1.682941266s 1.68312564s 1.68536948s 1.692470022s 1.693590085s 1.693600556s 1.695087009s 1.701340824s 1.70867242s 1.708765035s 1.710342441s 1.712891471s 1.716703609s 1.717766575s 1.720356377s 1.723071051s 1.723217478s 1.723345682s 1.741950834s 1.74563146s 1.75332344s 1.764912681s 1.768249541s 1.774523772s 1.777498186s 1.788662692s 1.792180709s 1.795672744s 1.802182217s 1.803103213s 1.807610386s 1.810096524s 1.813186768s 1.81362317s 1.81405099s 1.815346358s 1.815546533s 1.816928943s 1.817009431s 1.823034178s 1.844886363s 1.852275586s 1.85467755s 1.855697647s 1.857324352s 1.859668206s 1.878242684s 1.885616713s 1.890224076s 1.898749212s 1.911198904s 1.938002308s 1.948465501s 1.963617709s 2.015382831s 2.031667066s 2.044922374s 2.113213728s 2.1193036s 2.153181699s 2.254184971s 2.28209252s 2.34527329s 2.369629723s 2.37095811s 2.375344575s 2.402534295s 2.449853685s 2.508717463s 2.545512257s 2.588940167s 2.641808106s] Jan 26 13:47:12.182: INFO: 50 %ile: 1.603135747s Jan 26 13:47:12.183: INFO: 90 %ile: 1.948465501s Jan 26 13:47:12.183: INFO: 99 %ile: 2.588940167s Jan 26 13:47:12.183: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:47:12.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5792" for this suite. Jan 26 13:47:46.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:47:46.420: INFO: namespace svc-latency-5792 deletion completed in 34.226013349s • [SLOW TEST:65.274 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:47:46.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:47:46.523: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:47:54.655: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "pods-847" for this suite. Jan 26 13:48:36.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:48:36.873: INFO: namespace pods-847 deletion completed in 42.205855936s • [SLOW TEST:50.453 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:48:36.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jan 26 13:48:37.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 26 13:48:39.175: INFO: stderr: "" Jan 26 13:48:39.175: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:48:39.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4569" for this suite. Jan 26 13:48:45.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:48:45.366: INFO: namespace kubectl-4569 deletion completed in 6.181004061s • [SLOW TEST:8.491 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:48:45.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never 
be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:49:45.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4655" for this suite. Jan 26 13:50:07.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:50:07.709: INFO: namespace container-probe-4655 deletion completed in 22.214306264s • [SLOW TEST:82.343 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:50:07.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from 
an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 26 13:50:07.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7638' Jan 26 13:50:07.952: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 26 13:50:07.952: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jan 26 13:50:09.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7638' Jan 26 13:50:10.293: INFO: stderr: "" Jan 26 13:50:10.294: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:50:10.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7638" for this suite. 
Jan 26 13:50:16.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:50:16.457: INFO: namespace kubectl-7638 deletion completed in 6.155568448s • [SLOW TEST:8.746 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:50:16.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 26 13:50:16.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679" in namespace "downward-api-1453" to be "success or failure" Jan 26 13:50:16.594: INFO: Pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.591904ms Jan 26 13:50:18.612: INFO: Pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071999251s Jan 26 13:50:20.634: INFO: Pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09412437s Jan 26 13:50:22.644: INFO: Pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104276686s Jan 26 13:50:24.656: INFO: Pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116022042s STEP: Saw pod success Jan 26 13:50:24.656: INFO: Pod "downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679" satisfied condition "success or failure" Jan 26 13:50:24.660: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679 container client-container: STEP: delete the pod Jan 26 13:50:24.739: INFO: Waiting for pod downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679 to disappear Jan 26 13:50:24.810: INFO: Pod downwardapi-volume-f28e29a7-30ce-4952-809e-606e34705679 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:50:24.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1453" for this suite. 
Jan 26 13:50:30.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:50:30.992: INFO: namespace downward-api-1453 deletion completed in 6.177183673s • [SLOW TEST:14.535 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:50:30.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-9245f746-569b-42a8-8e17-84ad477f6608 STEP: Creating a pod to test consume configMaps Jan 26 13:50:31.085: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca" in namespace "projected-8696" to be "success or failure" Jan 26 13:50:31.145: INFO: Pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca": Phase="Pending", Reason="", readiness=false. 
Elapsed: 59.269329ms Jan 26 13:50:33.158: INFO: Pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07294574s Jan 26 13:50:35.168: INFO: Pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082277685s Jan 26 13:50:37.176: INFO: Pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090482334s Jan 26 13:50:39.191: INFO: Pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105467148s STEP: Saw pod success Jan 26 13:50:39.191: INFO: Pod "pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca" satisfied condition "success or failure" Jan 26 13:50:39.195: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca container projected-configmap-volume-test: STEP: delete the pod Jan 26 13:50:39.360: INFO: Waiting for pod pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca to disappear Jan 26 13:50:39.372: INFO: Pod pod-projected-configmaps-6a1fae76-9d76-4a4e-bea1-4e5479035fca no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:50:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8696" for this suite. 
Jan 26 13:50:45.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:50:45.579: INFO: namespace projected-8696 deletion completed in 6.198164058s • [SLOW TEST:14.586 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:50:45.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-79695e84-98fc-4290-b241-da3fff5a321d in namespace container-probe-5904 Jan 26 13:50:53.759: INFO: Started pod liveness-79695e84-98fc-4290-b241-da3fff5a321d in namespace container-probe-5904 STEP: checking the pod's current state and verifying that restartCount is present Jan 26 13:50:53.765: INFO: Initial restart count of pod liveness-79695e84-98fc-4290-b241-da3fff5a321d is 0 Jan 26 
13:51:15.971: INFO: Restart count of pod container-probe-5904/liveness-79695e84-98fc-4290-b241-da3fff5a321d is now 1 (22.205922199s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:51:16.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5904" for this suite. Jan 26 13:51:22.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:51:22.197: INFO: namespace container-probe-5904 deletion completed in 6.183372246s • [SLOW TEST:36.618 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:51:22.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4186.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4186.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4186.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.52.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.52.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.52.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.52.178_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4186.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4186.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4186.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4186.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4186.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 178.52.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.52.178_udp@PTR;check="$$(dig +tcp +noall +answer +search 178.52.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.52.178_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 26 13:51:34.518: INFO: Unable to read wheezy_udp@dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.527: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.535: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.542: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.549: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.556: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.560: INFO: Unable to read wheezy_udp@PodARecord from pod 
dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.565: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.569: INFO: Unable to read 10.107.52.178_udp@PTR from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.574: INFO: Unable to read 10.107.52.178_tcp@PTR from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.578: INFO: Unable to read jessie_udp@dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.589: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.615: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.663: INFO: Unable to 
read jessie_udp@_http._tcp.test-service-2.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.668: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-4186.svc.cluster.local from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.674: INFO: Unable to read jessie_udp@PodARecord from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.678: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.683: INFO: Unable to read 10.107.52.178_udp@PTR from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.687: INFO: Unable to read 10.107.52.178_tcp@PTR from pod dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5: the server could not find the requested resource (get pods dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5) Jan 26 13:51:34.687: INFO: Lookups using dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5 failed for: [wheezy_udp@dns-test-service.dns-4186.svc.cluster.local wheezy_tcp@dns-test-service.dns-4186.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-4186.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-4186.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 
10.107.52.178_udp@PTR 10.107.52.178_tcp@PTR jessie_udp@dns-test-service.dns-4186.svc.cluster.local jessie_tcp@dns-test-service.dns-4186.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4186.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-4186.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-4186.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.107.52.178_udp@PTR 10.107.52.178_tcp@PTR] Jan 26 13:51:39.887: INFO: DNS probes using dns-4186/dns-test-7b29187c-56d7-4d4c-973d-16a834fcc9c5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:51:40.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4186" for this suite. Jan 26 13:51:46.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:51:46.479: INFO: namespace dns-4186 deletion completed in 6.165669915s • [SLOW TEST:24.282 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:51:46.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 26 13:51:46.612: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jan 26 13:51:46.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 26 13:51:47.087: INFO: stderr: ""
Jan 26 13:51:47.087: INFO: stdout: "service/redis-slave created\n"
Jan 26 13:51:47.088: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jan 26 13:51:47.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 26 13:51:47.568: INFO: stderr: ""
Jan 26 13:51:47.568: INFO: stdout: "service/redis-master created\n"
Jan 26 13:51:47.569: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 26 13:51:47.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 26 13:51:48.095: INFO: stderr: ""
Jan 26 13:51:48.095: INFO: stdout: "service/frontend created\n"
Jan 26 13:51:48.096: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jan 26 13:51:48.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 26 13:51:48.676: INFO: stderr: ""
Jan 26 13:51:48.676: INFO: stdout: "deployment.apps/frontend created\n"
Jan 26 13:51:48.676: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 26 13:51:48.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 26 13:51:50.532: INFO: stderr: ""
Jan 26 13:51:50.532: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 26 13:51:50.533: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jan 26 13:51:50.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1522'
Jan 26 13:51:51.415: INFO: stderr: ""
Jan 26 13:51:51.415: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 26 13:51:51.415: INFO: Waiting for all frontend pods to be Running.
Jan 26 13:52:11.467: INFO: Waiting for frontend to serve content.
Jan 26 13:52:11.612: INFO: Trying to add a new entry to the guestbook.
Jan 26 13:52:11.647: INFO: Verifying that added entry can be retrieved.
Jan 26 13:52:11.692: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Jan 26 13:52:16.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522'
Jan 26 13:52:16.926: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 13:52:16.926: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 26 13:52:16.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522'
Jan 26 13:52:17.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jan 26 13:52:17.115: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 26 13:52:17.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522' Jan 26 13:52:17.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 26 13:52:17.274: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 26 13:52:17.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522' Jan 26 13:52:17.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 26 13:52:17.427: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 26 13:52:17.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522' Jan 26 13:52:17.585: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 26 13:52:17.585: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 26 13:52:17.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1522' Jan 26 13:52:17.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 26 13:52:17.702: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:52:17.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1522" for this suite. Jan 26 13:53:11.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:53:11.907: INFO: namespace kubectl-1522 deletion completed in 54.202104547s • [SLOW TEST:85.427 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:53:11.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting 
the pod to kubernetes Jan 26 13:53:12.067: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:53:36.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9658" for this suite. Jan 26 13:53:42.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:53:42.784: INFO: namespace pods-9658 deletion completed in 6.231385645s • [SLOW TEST:30.877 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:53:42.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 26 13:53:42.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae" in namespace "projected-2594" to be "success or failure" Jan 26 13:53:42.919: INFO: Pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 5.666273ms Jan 26 13:53:44.933: INFO: Pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019978327s Jan 26 13:53:46.976: INFO: Pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062878292s Jan 26 13:53:48.987: INFO: Pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073823661s Jan 26 13:53:50.996: INFO: Pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083530778s STEP: Saw pod success Jan 26 13:53:50.997: INFO: Pod "downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae" satisfied condition "success or failure" Jan 26 13:53:51.001: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae container client-container: STEP: delete the pod Jan 26 13:53:51.048: INFO: Waiting for pod downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae to disappear Jan 26 13:53:51.052: INFO: Pod downwardapi-volume-37bb1d66-a1cb-48c8-ac94-4bc41ba9d7ae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:53:51.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2594" for this suite. 
Jan 26 13:53:57.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:53:57.220: INFO: namespace projected-2594 deletion completed in 6.16349795s • [SLOW TEST:14.435 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:53:57.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 26 13:54:06.520: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:54:06.597: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8278" for this suite. Jan 26 13:54:28.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:54:28.836: INFO: namespace replicaset-8278 deletion completed in 22.176107422s • [SLOW TEST:31.614 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:54:28.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-1617b65f-05f8-4db3-bb3c-0c5196e86fc9 STEP: Creating a pod to test consume configMaps Jan 26 13:54:28.943: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2" in namespace "projected-3819" to be "success or failure" Jan 26 13:54:28.995: INFO: Pod 
"pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2": Phase="Pending", Reason="", readiness=false. Elapsed: 51.905855ms Jan 26 13:54:31.007: INFO: Pod "pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064123552s Jan 26 13:54:33.054: INFO: Pod "pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110782568s Jan 26 13:54:35.061: INFO: Pod "pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117877539s Jan 26 13:54:37.085: INFO: Pod "pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141960758s STEP: Saw pod success Jan 26 13:54:37.085: INFO: Pod "pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2" satisfied condition "success or failure" Jan 26 13:54:37.090: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2 container projected-configmap-volume-test: STEP: delete the pod Jan 26 13:54:37.310: INFO: Waiting for pod pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2 to disappear Jan 26 13:54:37.328: INFO: Pod pod-projected-configmaps-bf7ed81f-429c-4c9e-801d-87b3427b0fb2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:54:37.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3819" for this suite. 
Jan 26 13:54:43.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:54:43.477: INFO: namespace projected-3819 deletion completed in 6.141109467s • [SLOW TEST:14.640 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:54:43.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 26 13:54:43.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8559' Jan 26 13:54:43.771: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a 
future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 26 13:54:43.771: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 26 13:54:44.025: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 26 13:54:44.040: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 26 13:54:44.073: INFO: scanned /root for discovery docs: Jan 26 13:54:44.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8559' Jan 26 13:55:04.752: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 26 13:55:04.752: INFO: stdout: "Created e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea\nScaling up e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" 
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 26 13:55:04.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8559' Jan 26 13:55:04.930: INFO: stderr: "" Jan 26 13:55:04.930: INFO: stdout: "e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea-gvgvx e2e-test-nginx-rc-pm8b6 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 26 13:55:09.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8559' Jan 26 13:55:10.120: INFO: stderr: "" Jan 26 13:55:10.120: INFO: stdout: "e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea-gvgvx " Jan 26 13:55:10.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea-gvgvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8559' Jan 26 13:55:10.214: INFO: stderr: "" Jan 26 13:55:10.214: INFO: stdout: "true" Jan 26 13:55:10.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea-gvgvx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8559' Jan 26 13:55:10.302: INFO: stderr: "" Jan 26 13:55:10.302: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 26 13:55:10.302: INFO: e2e-test-nginx-rc-3f7ab2b3b0c28b382c8e22f9cac31bea-gvgvx is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 26 13:55:10.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8559' Jan 26 13:55:10.498: INFO: stderr: "" Jan 26 13:55:10.498: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:55:10.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8559" for this suite. 
Jan 26 13:55:32.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:55:32.703: INFO: namespace kubectl-8559 deletion completed in 22.13999835s • [SLOW TEST:49.226 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:55:32.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jan 26 13:55:32.802: INFO: Waiting up to 5m0s for pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb" in namespace "emptydir-8936" to be "success or failure" Jan 26 13:55:32.819: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.483344ms Jan 26 13:55:35.256: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453470233s Jan 26 13:55:37.271: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46822098s Jan 26 13:55:39.278: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475474584s Jan 26 13:55:41.285: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482752894s Jan 26 13:55:43.295: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.49221876s STEP: Saw pod success Jan 26 13:55:43.295: INFO: Pod "pod-adefeeab-1706-45d5-a6b1-a26984f511fb" satisfied condition "success or failure" Jan 26 13:55:43.300: INFO: Trying to get logs from node iruya-node pod pod-adefeeab-1706-45d5-a6b1-a26984f511fb container test-container: STEP: delete the pod Jan 26 13:55:43.483: INFO: Waiting for pod pod-adefeeab-1706-45d5-a6b1-a26984f511fb to disappear Jan 26 13:55:43.487: INFO: Pod pod-adefeeab-1706-45d5-a6b1-a26984f511fb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:55:43.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8936" for this suite. 
Jan 26 13:55:49.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:55:49.640: INFO: namespace emptydir-8936 deletion completed in 6.148558511s • [SLOW TEST:16.936 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:55:49.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-137, will wait for the garbage collector to delete the pods Jan 26 13:55:59.822: INFO: Deleting Job.batch foo took: 44.175767ms Jan 26 13:56:00.123: INFO: Terminating Job.batch foo pods took: 300.937984ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:56:46.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-137" for this suite. 
Jan 26 13:56:52.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:56:53.002: INFO: namespace job-137 deletion completed in 6.229914772s • [SLOW TEST:63.362 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:56:53.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-e9f1e685-d643-4739-845b-6f75d67bee9f [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:56:53.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7991" for this suite. 
Jan 26 13:56:59.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:56:59.376: INFO: namespace configmap-7991 deletion completed in 6.189778096s • [SLOW TEST:6.373 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:56:59.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jan 26 13:56:59.500: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:56:59.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4029" for this suite. 
Jan 26 13:57:05.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:57:05.862: INFO: namespace kubectl-4029 deletion completed in 6.251270377s • [SLOW TEST:6.485 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:57:05.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-dpfh STEP: Creating a pod to test atomic-volume-subpath Jan 26 13:57:06.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dpfh" in namespace "subpath-8803" to be "success or failure" Jan 26 13:57:06.039: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.096961ms Jan 26 13:57:08.045: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009417802s Jan 26 13:57:10.053: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017348447s Jan 26 13:57:12.060: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024937684s Jan 26 13:57:14.073: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037559461s Jan 26 13:57:16.082: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 10.046514024s Jan 26 13:57:18.096: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 12.060547122s Jan 26 13:57:20.105: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 14.069664309s Jan 26 13:57:22.137: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 16.101230662s Jan 26 13:57:24.145: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 18.11011758s Jan 26 13:57:26.156: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 20.12045478s Jan 26 13:57:28.166: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 22.130341537s Jan 26 13:57:30.174: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 24.138433826s Jan 26 13:57:32.182: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 26.14695021s Jan 26 13:57:34.191: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Running", Reason="", readiness=true. Elapsed: 28.155523404s Jan 26 13:57:36.197: INFO: Pod "pod-subpath-test-configmap-dpfh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.161387614s STEP: Saw pod success Jan 26 13:57:36.197: INFO: Pod "pod-subpath-test-configmap-dpfh" satisfied condition "success or failure" Jan 26 13:57:36.200: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-dpfh container test-container-subpath-configmap-dpfh: STEP: delete the pod Jan 26 13:57:36.250: INFO: Waiting for pod pod-subpath-test-configmap-dpfh to disappear Jan 26 13:57:36.285: INFO: Pod pod-subpath-test-configmap-dpfh no longer exists STEP: Deleting pod pod-subpath-test-configmap-dpfh Jan 26 13:57:36.285: INFO: Deleting pod "pod-subpath-test-configmap-dpfh" in namespace "subpath-8803" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:57:36.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8803" for this suite. Jan 26 13:57:42.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:57:42.457: INFO: namespace subpath-8803 deletion completed in 6.155417712s • [SLOW TEST:36.595 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 
13:57:42.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 13:57:42.618: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 26 13:57:42.637: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 26 13:57:47.653: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 26 13:57:49.665: INFO: Creating deployment "test-rolling-update-deployment" Jan 26 13:57:49.672: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 26 13:57:50.098: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 26 13:57:52.113: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 26 13:57:52.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:57:54.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:57:56.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715643870, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 26 13:57:58.128: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 26 13:57:58.145: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1473,SelfLink:/apis/apps/v1/namespaces/deployment-1473/deployments/test-rolling-update-deployment,UID:6e62a81c-1a53-47e3-a27e-82e58a2f3832,ResourceVersion:21943194,Generation:1,CreationTimestamp:2020-01-26 13:57:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-26 13:57:50 +0000 UTC 2020-01-26 13:57:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-26 13:57:57 +0000 UTC 2020-01-26 13:57:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 26 13:57:58.150: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1473,SelfLink:/apis/apps/v1/namespaces/deployment-1473/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:e77e66a8-8699-400c-b158-4a5f6d6ba757,ResourceVersion:21943183,Generation:1,CreationTimestamp:2020-01-26 13:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6e62a81c-1a53-47e3-a27e-82e58a2f3832 0xc002b5deb7 0xc002b5deb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 26 13:57:58.150: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 26 13:57:58.150: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1473,SelfLink:/apis/apps/v1/namespaces/deployment-1473/replicasets/test-rolling-update-controller,UID:db5aa96d-a332-432a-8afb-7783b5acb2ff,ResourceVersion:21943192,Generation:2,CreationTimestamp:2020-01-26 13:57:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6e62a81c-1a53-47e3-a27e-82e58a2f3832 0xc002b5dde7 0xc002b5dde8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 26 13:57:58.155: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-l9qv5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-l9qv5,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1473,SelfLink:/api/v1/namespaces/deployment-1473/pods/test-rolling-update-deployment-79f6b9d75c-l9qv5,UID:9d929bec-76ad-40e4-a8e4-9df98276a07c,ResourceVersion:21943182,Generation:0,CreationTimestamp:2020-01-26 13:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c e77e66a8-8699-400c-b158-4a5f6d6ba757 0xc000af92c7 0xc000af92c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bwvgq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bwvgq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bwvgq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000af9440} {node.kubernetes.io/unreachable Exists NoExecute 0xc000af9460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:57:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:57:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:57:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 13:57:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-26 13:57:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-26 13:57:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://534c71c7b6afb9117e44b0583cb34995216915a1f88d0fc97d763fae04af9c3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:57:58.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-1473" for this suite. Jan 26 13:58:04.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:58:04.266: INFO: namespace deployment-1473 deletion completed in 6.104668476s • [SLOW TEST:21.809 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:58:04.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-2ab0ba47-6b6c-4048-91a9-79a807ca6aae STEP: Creating a pod to test consume secrets Jan 26 13:58:04.454: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec" in namespace "projected-4957" to be "success or failure" Jan 26 13:58:04.588: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 133.816326ms Jan 26 13:58:06.607: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152649065s Jan 26 13:58:08.630: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175403617s Jan 26 13:58:10.641: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187071377s Jan 26 13:58:12.654: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199854809s Jan 26 13:58:14.664: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.209382125s STEP: Saw pod success Jan 26 13:58:14.664: INFO: Pod "pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec" satisfied condition "success or failure" Jan 26 13:58:14.668: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec container projected-secret-volume-test: STEP: delete the pod Jan 26 13:58:14.726: INFO: Waiting for pod pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec to disappear Jan 26 13:58:14.746: INFO: Pod pod-projected-secrets-901eeeef-5d71-442d-85ad-8c0f4d4b3aec no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:58:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4957" for this suite. 
Jan 26 13:58:20.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:58:21.016: INFO: namespace projected-4957 deletion completed in 6.210085786s • [SLOW TEST:16.749 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:58:21.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 26 13:58:21.160: INFO: Waiting up to 5m0s for pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217" in namespace "downward-api-3153" to be "success or failure" Jan 26 13:58:21.178: INFO: Pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217": Phase="Pending", Reason="", readiness=false. Elapsed: 17.234199ms Jan 26 13:58:23.185: INFO: Pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024993326s Jan 26 13:58:25.205: INFO: Pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044481854s Jan 26 13:58:27.215: INFO: Pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054829817s Jan 26 13:58:29.223: INFO: Pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062917398s STEP: Saw pod success Jan 26 13:58:29.224: INFO: Pod "downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217" satisfied condition "success or failure" Jan 26 13:58:29.227: INFO: Trying to get logs from node iruya-node pod downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217 container dapi-container: STEP: delete the pod Jan 26 13:58:29.344: INFO: Waiting for pod downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217 to disappear Jan 26 13:58:29.353: INFO: Pod downward-api-21abb2ce-e144-4b01-984b-e2f7c42ee217 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:58:29.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3153" for this suite. 
Jan 26 13:58:35.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:58:35.533: INFO: namespace downward-api-3153 deletion completed in 6.172165554s • [SLOW TEST:14.517 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:58:35.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 26 13:58:35.732: INFO: Number of nodes with available pods: 0 Jan 26 13:58:35.732: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:37.279: INFO: Number of nodes with available pods: 0 Jan 26 13:58:37.280: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:37.986: INFO: Number of nodes with available pods: 0 Jan 26 13:58:37.986: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:39.024: INFO: Number of nodes with available pods: 0 Jan 26 13:58:39.024: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:39.771: INFO: Number of nodes with available pods: 0 Jan 26 13:58:39.771: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:40.757: INFO: Number of nodes with available pods: 0 Jan 26 13:58:40.757: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:43.311: INFO: Number of nodes with available pods: 0 Jan 26 13:58:43.311: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:43.772: INFO: Number of nodes with available pods: 0 Jan 26 13:58:43.772: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:45.071: INFO: Number of nodes with available pods: 0 Jan 26 13:58:45.071: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:45.833: INFO: Number of nodes with available pods: 0 Jan 26 13:58:45.834: INFO: Node iruya-node is running more than one daemon pod Jan 26 13:58:46.751: INFO: Number of nodes with available pods: 2 Jan 26 13:58:46.751: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jan 26 13:58:46.805: INFO: Number of nodes with available pods: 1 Jan 26 13:58:46.805: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:47.829: INFO: Number of nodes with available pods: 1 Jan 26 13:58:47.829: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:48.825: INFO: Number of nodes with available pods: 1 Jan 26 13:58:48.825: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:49.831: INFO: Number of nodes with available pods: 1 Jan 26 13:58:49.831: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:50.828: INFO: Number of nodes with available pods: 1 Jan 26 13:58:50.828: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:51.827: INFO: Number of nodes with available pods: 1 Jan 26 13:58:51.827: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:52.826: INFO: Number of nodes with available pods: 1 Jan 26 13:58:52.826: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:53.827: INFO: Number of nodes with available pods: 1 Jan 26 13:58:53.828: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:54.831: INFO: Number of nodes with available pods: 1 Jan 26 13:58:54.831: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:55.825: INFO: Number of nodes with available pods: 1 Jan 26 13:58:55.825: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:56.827: INFO: Number of nodes with available pods: 1 Jan 26 13:58:56.827: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:57.893: INFO: Number of nodes with available pods: 1 Jan 26 13:58:57.894: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:58.829: INFO: Number of nodes with available pods: 1 Jan 26 
13:58:58.829: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:58:59.846: INFO: Number of nodes with available pods: 1 Jan 26 13:58:59.846: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:59:00.831: INFO: Number of nodes with available pods: 1 Jan 26 13:59:00.831: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:59:02.909: INFO: Number of nodes with available pods: 1 Jan 26 13:59:02.909: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:59:03.843: INFO: Number of nodes with available pods: 1 Jan 26 13:59:03.843: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:59:04.835: INFO: Number of nodes with available pods: 1 Jan 26 13:59:04.835: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 26 13:59:05.825: INFO: Number of nodes with available pods: 2 Jan 26 13:59:05.825: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2073, will wait for the garbage collector to delete the pods Jan 26 13:59:05.897: INFO: Deleting DaemonSet.extensions daemon-set took: 13.543909ms Jan 26 13:59:06.298: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.991478ms Jan 26 13:59:17.907: INFO: Number of nodes with available pods: 0 Jan 26 13:59:17.907: INFO: Number of running nodes: 0, number of available pods: 0 Jan 26 13:59:17.912: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2073/daemonsets","resourceVersion":"21943428"},"items":null} Jan 26 13:59:17.916: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2073/pods","resourceVersion":"21943428"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:59:17.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2073" for this suite. Jan 26 13:59:23.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:59:24.066: INFO: namespace daemonsets-2073 deletion completed in 6.131919327s • [SLOW TEST:48.531 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:59:24.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:59:32.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-75" for this suite. Jan 26 13:59:38.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:59:38.659: INFO: namespace kubelet-test-75 deletion completed in 6.461165763s • [SLOW TEST:14.593 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:59:38.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jan 26 13:59:38.772: INFO: Waiting up to 5m0s for pod 
"var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df" in namespace "var-expansion-7989" to be "success or failure" Jan 26 13:59:38.794: INFO: Pod "var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df": Phase="Pending", Reason="", readiness=false. Elapsed: 22.546136ms Jan 26 13:59:40.801: INFO: Pod "var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02913515s Jan 26 13:59:42.809: INFO: Pod "var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03683933s Jan 26 13:59:44.840: INFO: Pod "var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068192979s Jan 26 13:59:46.851: INFO: Pod "var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079167531s STEP: Saw pod success Jan 26 13:59:46.851: INFO: Pod "var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df" satisfied condition "success or failure" Jan 26 13:59:46.855: INFO: Trying to get logs from node iruya-node pod var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df container dapi-container: STEP: delete the pod Jan 26 13:59:46.919: INFO: Waiting for pod var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df to disappear Jan 26 13:59:47.078: INFO: Pod var-expansion-cb724f3d-0256-4bbb-87af-abdd8d5793df no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 13:59:47.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7989" for this suite. 
Jan 26 13:59:53.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 13:59:53.202: INFO: namespace var-expansion-7989 deletion completed in 6.117018399s • [SLOW TEST:14.542 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 13:59:53.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 26 14:00:03.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b9c74b25-3c61-4c48-be92-d2a62f91d15a -c busybox-main-container --namespace=emptydir-6807 -- cat /usr/share/volumeshare/shareddata.txt' Jan 26 14:00:05.817: INFO: stderr: "I0126 14:00:05.481408 1046 log.go:172] (0xc000116000) (0xc0007081e0) Create stream\nI0126 14:00:05.481609 1046 log.go:172] (0xc000116000) (0xc0007081e0) Stream added, broadcasting: 1\nI0126 14:00:05.500517 1046 log.go:172] (0xc000116000) 
Reply frame received for 1\nI0126 14:00:05.500647 1046 log.go:172] (0xc000116000) (0xc000541ae0) Create stream\nI0126 14:00:05.500659 1046 log.go:172] (0xc000116000) (0xc000541ae0) Stream added, broadcasting: 3\nI0126 14:00:05.506478 1046 log.go:172] (0xc000116000) Reply frame received for 3\nI0126 14:00:05.506598 1046 log.go:172] (0xc000116000) (0xc000a06000) Create stream\nI0126 14:00:05.506618 1046 log.go:172] (0xc000116000) (0xc000a06000) Stream added, broadcasting: 5\nI0126 14:00:05.509737 1046 log.go:172] (0xc000116000) Reply frame received for 5\nI0126 14:00:05.683396 1046 log.go:172] (0xc000116000) Data frame received for 3\nI0126 14:00:05.683546 1046 log.go:172] (0xc000541ae0) (3) Data frame handling\nI0126 14:00:05.683578 1046 log.go:172] (0xc000541ae0) (3) Data frame sent\nI0126 14:00:05.806124 1046 log.go:172] (0xc000116000) (0xc000a06000) Stream removed, broadcasting: 5\nI0126 14:00:05.806319 1046 log.go:172] (0xc000116000) Data frame received for 1\nI0126 14:00:05.806395 1046 log.go:172] (0xc000116000) (0xc000541ae0) Stream removed, broadcasting: 3\nI0126 14:00:05.806447 1046 log.go:172] (0xc0007081e0) (1) Data frame handling\nI0126 14:00:05.806473 1046 log.go:172] (0xc0007081e0) (1) Data frame sent\nI0126 14:00:05.806481 1046 log.go:172] (0xc000116000) (0xc0007081e0) Stream removed, broadcasting: 1\nI0126 14:00:05.806490 1046 log.go:172] (0xc000116000) Go away received\nI0126 14:00:05.807798 1046 log.go:172] (0xc000116000) (0xc0007081e0) Stream removed, broadcasting: 1\nI0126 14:00:05.807830 1046 log.go:172] (0xc000116000) (0xc000541ae0) Stream removed, broadcasting: 3\nI0126 14:00:05.807842 1046 log.go:172] (0xc000116000) (0xc000a06000) Stream removed, broadcasting: 5\n" Jan 26 14:00:05.817: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:00:05.817: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "emptydir-6807" for this suite. Jan 26 14:00:11.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:00:12.035: INFO: namespace emptydir-6807 deletion completed in 6.208037999s • [SLOW TEST:18.833 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:00:12.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-51a165bd-158f-49ef-8b1f-f252848c9860 STEP: Creating a pod to test consume secrets Jan 26 14:00:12.150: INFO: Waiting up to 5m0s for pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd" in namespace "secrets-7193" to be "success or failure" Jan 26 14:00:12.223: INFO: Pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 72.401835ms Jan 26 14:00:14.233: INFO: Pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.082634575s Jan 26 14:00:16.244: INFO: Pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09347635s Jan 26 14:00:18.253: INFO: Pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102157623s Jan 26 14:00:20.264: INFO: Pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113686568s STEP: Saw pod success Jan 26 14:00:20.264: INFO: Pod "pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd" satisfied condition "success or failure" Jan 26 14:00:20.269: INFO: Trying to get logs from node iruya-node pod pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd container secret-volume-test: STEP: delete the pod Jan 26 14:00:20.344: INFO: Waiting for pod pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd to disappear Jan 26 14:00:20.378: INFO: Pod pod-secrets-eac2c3d1-14c2-422c-b843-978c55a7b2cd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:00:20.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7193" for this suite. 
Jan 26 14:00:26.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:00:26.619: INFO: namespace secrets-7193 deletion completed in 6.23219775s • [SLOW TEST:14.584 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:00:26.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3103 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 26 14:00:26.705: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 26 14:00:59.015: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3103 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Jan 26 14:00:59.015: INFO: >>> kubeConfig: /root/.kube/config I0126 14:00:59.120235 8 log.go:172] (0xc00138c420) (0xc0024f43c0) Create stream I0126 14:00:59.120291 8 log.go:172] (0xc00138c420) (0xc0024f43c0) Stream added, broadcasting: 1 I0126 14:00:59.128521 8 log.go:172] (0xc00138c420) Reply frame received for 1 I0126 14:00:59.128568 8 log.go:172] (0xc00138c420) (0xc00240e1e0) Create stream I0126 14:00:59.128579 8 log.go:172] (0xc00138c420) (0xc00240e1e0) Stream added, broadcasting: 3 I0126 14:00:59.133451 8 log.go:172] (0xc00138c420) Reply frame received for 3 I0126 14:00:59.133504 8 log.go:172] (0xc00138c420) (0xc00240e280) Create stream I0126 14:00:59.133518 8 log.go:172] (0xc00138c420) (0xc00240e280) Stream added, broadcasting: 5 I0126 14:00:59.136100 8 log.go:172] (0xc00138c420) Reply frame received for 5 I0126 14:00:59.405118 8 log.go:172] (0xc00138c420) Data frame received for 3 I0126 14:00:59.405343 8 log.go:172] (0xc00240e1e0) (3) Data frame handling I0126 14:00:59.405401 8 log.go:172] (0xc00240e1e0) (3) Data frame sent I0126 14:00:59.555602 8 log.go:172] (0xc00138c420) Data frame received for 1 I0126 14:00:59.555757 8 log.go:172] (0xc00138c420) (0xc00240e1e0) Stream removed, broadcasting: 3 I0126 14:00:59.555860 8 log.go:172] (0xc0024f43c0) (1) Data frame handling I0126 14:00:59.555896 8 log.go:172] (0xc0024f43c0) (1) Data frame sent I0126 14:00:59.555918 8 log.go:172] (0xc00138c420) (0xc00240e280) Stream removed, broadcasting: 5 I0126 14:00:59.556014 8 log.go:172] (0xc00138c420) (0xc0024f43c0) Stream removed, broadcasting: 1 I0126 14:00:59.556047 8 log.go:172] (0xc00138c420) Go away received I0126 14:00:59.556402 8 log.go:172] (0xc00138c420) (0xc0024f43c0) Stream removed, broadcasting: 1 I0126 14:00:59.556456 8 log.go:172] (0xc00138c420) (0xc00240e1e0) Stream removed, broadcasting: 3 I0126 14:00:59.556508 8 log.go:172] (0xc00138c420) (0xc00240e280) Stream removed, broadcasting: 5 Jan 26 14:00:59.556: INFO: Waiting for 
endpoints: map[] Jan 26 14:00:59.562: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3103 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 26 14:00:59.562: INFO: >>> kubeConfig: /root/.kube/config I0126 14:00:59.678564 8 log.go:172] (0xc002bac840) (0xc0024f83c0) Create stream I0126 14:00:59.678660 8 log.go:172] (0xc002bac840) (0xc0024f83c0) Stream added, broadcasting: 1 I0126 14:00:59.686464 8 log.go:172] (0xc002bac840) Reply frame received for 1 I0126 14:00:59.686564 8 log.go:172] (0xc002bac840) (0xc0023b4f00) Create stream I0126 14:00:59.686594 8 log.go:172] (0xc002bac840) (0xc0023b4f00) Stream added, broadcasting: 3 I0126 14:00:59.689290 8 log.go:172] (0xc002bac840) Reply frame received for 3 I0126 14:00:59.689328 8 log.go:172] (0xc002bac840) (0xc0023b4fa0) Create stream I0126 14:00:59.689343 8 log.go:172] (0xc002bac840) (0xc0023b4fa0) Stream added, broadcasting: 5 I0126 14:00:59.691364 8 log.go:172] (0xc002bac840) Reply frame received for 5 I0126 14:00:59.822958 8 log.go:172] (0xc002bac840) Data frame received for 3 I0126 14:00:59.823047 8 log.go:172] (0xc0023b4f00) (3) Data frame handling I0126 14:00:59.823068 8 log.go:172] (0xc0023b4f00) (3) Data frame sent I0126 14:01:00.044069 8 log.go:172] (0xc002bac840) (0xc0023b4f00) Stream removed, broadcasting: 3 I0126 14:01:00.044279 8 log.go:172] (0xc002bac840) Data frame received for 1 I0126 14:01:00.044316 8 log.go:172] (0xc0024f83c0) (1) Data frame handling I0126 14:01:00.044358 8 log.go:172] (0xc0024f83c0) (1) Data frame sent I0126 14:01:00.044474 8 log.go:172] (0xc002bac840) (0xc0023b4fa0) Stream removed, broadcasting: 5 I0126 14:01:00.044537 8 log.go:172] (0xc002bac840) (0xc0024f83c0) Stream removed, broadcasting: 1 I0126 14:01:00.044567 8 log.go:172] (0xc002bac840) Go away received I0126 
14:01:00.044780 8 log.go:172] (0xc002bac840) (0xc0024f83c0) Stream removed, broadcasting: 1 I0126 14:01:00.044801 8 log.go:172] (0xc002bac840) (0xc0023b4f00) Stream removed, broadcasting: 3 I0126 14:01:00.044814 8 log.go:172] (0xc002bac840) (0xc0023b4fa0) Stream removed, broadcasting: 5 Jan 26 14:01:00.044: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:01:00.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3103" for this suite. Jan 26 14:01:24.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:01:24.442: INFO: namespace pod-network-test-3103 deletion completed in 24.387212278s • [SLOW TEST:57.822 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:01:24.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jan 26 14:01:24.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1926' Jan 26 14:01:24.980: INFO: stderr: "" Jan 26 14:01:24.980: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 26 14:01:24.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1926' Jan 26 14:01:25.228: INFO: stderr: "" Jan 26 14:01:25.228: INFO: stdout: "update-demo-nautilus-7sg8c update-demo-nautilus-clb2c " Jan 26 14:01:25.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sg8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:01:25.398: INFO: stderr: "" Jan 26 14:01:25.398: INFO: stdout: "" Jan 26 14:01:25.398: INFO: update-demo-nautilus-7sg8c is created but not running Jan 26 14:01:30.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1926' Jan 26 14:01:31.401: INFO: stderr: "" Jan 26 14:01:31.401: INFO: stdout: "update-demo-nautilus-7sg8c update-demo-nautilus-clb2c " Jan 26 14:01:31.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sg8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:01:32.053: INFO: stderr: "" Jan 26 14:01:32.053: INFO: stdout: "" Jan 26 14:01:32.054: INFO: update-demo-nautilus-7sg8c is created but not running Jan 26 14:01:37.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1926' Jan 26 14:01:37.191: INFO: stderr: "" Jan 26 14:01:37.191: INFO: stdout: "update-demo-nautilus-7sg8c update-demo-nautilus-clb2c " Jan 26 14:01:37.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sg8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:01:37.284: INFO: stderr: "" Jan 26 14:01:37.284: INFO: stdout: "true" Jan 26 14:01:37.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sg8c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:01:37.456: INFO: stderr: "" Jan 26 14:01:37.456: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 14:01:37.456: INFO: validating pod update-demo-nautilus-7sg8c Jan 26 14:01:37.479: INFO: got data: { "image": "nautilus.jpg" } Jan 26 14:01:37.479: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 26 14:01:37.479: INFO: update-demo-nautilus-7sg8c is verified up and running Jan 26 14:01:37.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clb2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:01:37.623: INFO: stderr: "" Jan 26 14:01:37.623: INFO: stdout: "true" Jan 26 14:01:37.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clb2c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:01:37.747: INFO: stderr: "" Jan 26 14:01:37.747: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 26 14:01:37.747: INFO: validating pod update-demo-nautilus-clb2c Jan 26 14:01:37.759: INFO: got data: { "image": "nautilus.jpg" } Jan 26 14:01:37.759: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 26 14:01:37.759: INFO: update-demo-nautilus-clb2c is verified up and running STEP: rolling-update to new replication controller Jan 26 14:01:37.761: INFO: scanned /root for discovery docs: Jan 26 14:01:37.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1926' Jan 26 14:02:08.692: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 26 14:02:08.692: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 26 14:02:08.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1926' Jan 26 14:02:08.808: INFO: stderr: "" Jan 26 14:02:08.808: INFO: stdout: "update-demo-kitten-5pctn update-demo-kitten-h6t7q " Jan 26 14:02:08.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5pctn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:02:08.936: INFO: stderr: "" Jan 26 14:02:08.936: INFO: stdout: "true" Jan 26 14:02:08.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5pctn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:02:09.087: INFO: stderr: "" Jan 26 14:02:09.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 26 14:02:09.087: INFO: validating pod update-demo-kitten-5pctn Jan 26 14:02:09.108: INFO: got data: { "image": "kitten.jpg" } Jan 26 14:02:09.108: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 26 14:02:09.108: INFO: update-demo-kitten-5pctn is verified up and running Jan 26 14:02:09.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h6t7q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:02:09.201: INFO: stderr: "" Jan 26 14:02:09.202: INFO: stdout: "true" Jan 26 14:02:09.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h6t7q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1926' Jan 26 14:02:09.302: INFO: stderr: "" Jan 26 14:02:09.302: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 26 14:02:09.302: INFO: validating pod update-demo-kitten-h6t7q Jan 26 14:02:09.313: INFO: got data: { "image": "kitten.jpg" } Jan 26 14:02:09.313: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Jan 26 14:02:09.313: INFO: update-demo-kitten-h6t7q is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:02:09.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1926" for this suite. Jan 26 14:02:33.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:02:33.564: INFO: namespace kubectl-1926 deletion completed in 24.242994982s • [SLOW TEST:69.122 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:02:33.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 26 14:02:33.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4551' Jan 26 14:02:33.842: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 26 14:02:33.842: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jan 26 14:02:33.943: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-tbrnz] Jan 26 14:02:33.943: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-tbrnz" in namespace "kubectl-4551" to be "running and ready" Jan 26 14:02:33.947: INFO: Pod "e2e-test-nginx-rc-tbrnz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.536142ms Jan 26 14:02:35.955: INFO: Pod "e2e-test-nginx-rc-tbrnz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012021022s Jan 26 14:02:37.967: INFO: Pod "e2e-test-nginx-rc-tbrnz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02385547s Jan 26 14:02:39.976: INFO: Pod "e2e-test-nginx-rc-tbrnz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032714992s Jan 26 14:02:41.987: INFO: Pod "e2e-test-nginx-rc-tbrnz": Phase="Running", Reason="", readiness=true. Elapsed: 8.044212528s Jan 26 14:02:41.987: INFO: Pod "e2e-test-nginx-rc-tbrnz" satisfied condition "running and ready" Jan 26 14:02:41.987: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-tbrnz] Jan 26 14:02:41.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4551' Jan 26 14:02:42.239: INFO: stderr: "" Jan 26 14:02:42.239: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jan 26 14:02:42.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4551' Jan 26 14:02:42.391: INFO: stderr: "" Jan 26 14:02:42.391: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:02:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4551" for this suite. Jan 26 14:03:04.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:03:04.601: INFO: namespace kubectl-4551 deletion completed in 22.202518221s • [SLOW TEST:31.037 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jan 26 14:03:04.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 26 14:03:04.786: INFO: Waiting up to 5m0s for pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3" in namespace "emptydir-7530" to be "success or failure" Jan 26 14:03:04.810: INFO: Pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.892743ms Jan 26 14:03:06.818: INFO: Pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03142176s Jan 26 14:03:08.828: INFO: Pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041445577s Jan 26 14:03:10.840: INFO: Pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053534771s Jan 26 14:03:12.869: INFO: Pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.083073156s STEP: Saw pod success Jan 26 14:03:12.869: INFO: Pod "pod-5a223c68-cfc5-4353-b97c-7707419854b3" satisfied condition "success or failure" Jan 26 14:03:12.879: INFO: Trying to get logs from node iruya-node pod pod-5a223c68-cfc5-4353-b97c-7707419854b3 container test-container: STEP: delete the pod Jan 26 14:03:12.959: INFO: Waiting for pod pod-5a223c68-cfc5-4353-b97c-7707419854b3 to disappear Jan 26 14:03:12.983: INFO: Pod pod-5a223c68-cfc5-4353-b97c-7707419854b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:03:12.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7530" for this suite. Jan 26 14:03:19.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:03:19.172: INFO: namespace emptydir-7530 deletion completed in 6.181166714s • [SLOW TEST:14.569 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:03:19.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 26 14:03:19.311: INFO: Waiting up to 5m0s for pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e" in namespace "emptydir-6417" to be "success or failure" Jan 26 14:03:19.322: INFO: Pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.628973ms Jan 26 14:03:21.331: INFO: Pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019942793s Jan 26 14:03:23.339: INFO: Pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028354066s Jan 26 14:03:25.354: INFO: Pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043206942s Jan 26 14:03:27.364: INFO: Pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053006918s STEP: Saw pod success Jan 26 14:03:27.364: INFO: Pod "pod-84b6afa0-2c58-4e1d-9423-d427c361a84e" satisfied condition "success or failure" Jan 26 14:03:27.370: INFO: Trying to get logs from node iruya-node pod pod-84b6afa0-2c58-4e1d-9423-d427c361a84e container test-container: STEP: delete the pod Jan 26 14:03:27.430: INFO: Waiting for pod pod-84b6afa0-2c58-4e1d-9423-d427c361a84e to disappear Jan 26 14:03:27.435: INFO: Pod pod-84b6afa0-2c58-4e1d-9423-d427c361a84e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:03:27.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6417" for this suite. 
Jan 26 14:03:33.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:03:33.751: INFO: namespace emptydir-6417 deletion completed in 6.308227093s • [SLOW TEST:14.579 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:03:33.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-1b3f4070-9a23-48ec-aa01-9bc3e0c70b97 STEP: Creating a pod to test consume configMaps Jan 26 14:03:34.075: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9" in namespace "projected-696" to be "success or failure" Jan 26 14:03:34.082: INFO: Pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.659005ms Jan 26 14:03:36.091: INFO: Pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015364327s Jan 26 14:03:38.105: INFO: Pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029932749s Jan 26 14:03:40.121: INFO: Pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045687301s Jan 26 14:03:42.132: INFO: Pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056522882s STEP: Saw pod success Jan 26 14:03:42.132: INFO: Pod "pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9" satisfied condition "success or failure" Jan 26 14:03:42.138: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9 container projected-configmap-volume-test: STEP: delete the pod Jan 26 14:03:42.280: INFO: Waiting for pod pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9 to disappear Jan 26 14:03:42.290: INFO: Pod pod-projected-configmaps-f0442285-7e62-48ac-b63b-25b3f57a5cb9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:03:42.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-696" for this suite. 
Jan 26 14:03:48.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:03:48.500: INFO: namespace projected-696 deletion completed in 6.203365368s • [SLOW TEST:14.747 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:03:48.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 26 14:03:48.672: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Jan 26 14:03:50.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 14:03:52.102: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 14:03:54.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 14:03:56.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 14:03:58.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715644229, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 26 14:04:00.965: INFO: Waited 851.959304ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:04:01.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2384" for this suite.
Jan 26 14:04:07.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:04:07.804: INFO: namespace aggregator-2384 deletion completed in 6.301803339s
• [SLOW TEST:19.303 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:04:07.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 26 14:04:08.080: INFO: Waiting up to 5m0s for pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772" in namespace "containers-8979" to be "success or failure"
Jan 26 14:04:08.086: INFO: Pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772": Phase="Pending", Reason="", readiness=false. Elapsed: 5.312041ms
Jan 26 14:04:10.099: INFO: Pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018175858s
Jan 26 14:04:12.132: INFO: Pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051618288s
Jan 26 14:04:14.145: INFO: Pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064137907s
Jan 26 14:04:16.157: INFO: Pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076143633s
STEP: Saw pod success
Jan 26 14:04:16.157: INFO: Pod "client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772" satisfied condition "success or failure"
Jan 26 14:04:16.162: INFO: Trying to get logs from node iruya-node pod client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772 container test-container:
STEP: delete the pod
Jan 26 14:04:16.234: INFO: Waiting for pod client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772 to disappear
Jan 26 14:04:16.287: INFO: Pod client-containers-11fe4fde-8b5f-46f9-bdf0-8b6f98c20772 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:04:16.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8979" for this suite.
Jan 26 14:04:22.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:04:22.520: INFO: namespace containers-8979 deletion completed in 6.226820448s
• [SLOW TEST:14.716 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:04:22.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-095ff0b1-06ac-4c66-9ec3-970cc0dd2005
STEP: Creating a pod to test consume configMaps
Jan 26 14:04:22.687: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545" in namespace "configmap-5907" to be "success or failure"
Jan 26 14:04:22.697: INFO: Pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545": Phase="Pending", Reason="", readiness=false. Elapsed: 9.221976ms
Jan 26 14:04:24.708: INFO: Pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020510501s
Jan 26 14:04:26.717: INFO: Pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03010359s
Jan 26 14:04:28.725: INFO: Pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037847695s
Jan 26 14:04:30.739: INFO: Pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051776637s
STEP: Saw pod success
Jan 26 14:04:30.739: INFO: Pod "pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545" satisfied condition "success or failure"
Jan 26 14:04:30.744: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545 container configmap-volume-test:
STEP: delete the pod
Jan 26 14:04:30.785: INFO: Waiting for pod pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545 to disappear
Jan 26 14:04:30.834: INFO: Pod pod-configmaps-b5985253-3403-4a35-8bd0-a5cceb921545 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:04:30.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5907" for this suite.
Jan 26 14:04:36.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:04:36.989: INFO: namespace configmap-5907 deletion completed in 6.149726036s
• [SLOW TEST:14.468 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:04:36.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8666
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 26 14:04:37.195: INFO: Found 0 stateful pods, waiting for 3
Jan 26 14:04:47.201: INFO: Found 2 stateful pods, waiting for 3
Jan 26 14:04:57.213: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:04:57.213: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:04:57.213: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 14:05:07.220: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:05:07.221: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:05:07.221: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 26 14:05:07.276: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 26 14:05:17.379: INFO: Updating stateful set ss2
Jan 26 14:05:17.535: INFO: Waiting for Pod statefulset-8666/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 26 14:05:27.711: INFO: Found 2 stateful pods, waiting for 3
Jan 26 14:05:37.721: INFO: Found 2 stateful pods, waiting for 3
Jan 26 14:05:47.722: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:05:47.722: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:05:47.722: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 26 14:05:47.759: INFO: Updating stateful set ss2
Jan 26 14:05:47.769: INFO: Waiting for Pod statefulset-8666/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 14:05:57.803: INFO: Waiting for Pod statefulset-8666/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 14:06:07.817: INFO: Updating stateful set ss2
Jan 26 14:06:07.900: INFO: Waiting for StatefulSet statefulset-8666/ss2 to complete update
Jan 26 14:06:07.900: INFO: Waiting for Pod statefulset-8666/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 26 14:06:17.920: INFO: Waiting for StatefulSet statefulset-8666/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 26 14:06:27.911: INFO: Deleting all statefulset in ns statefulset-8666
Jan 26 14:06:27.917: INFO: Scaling statefulset ss2 to 0
Jan 26 14:06:57.942: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:06:57.945: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:06:57.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8666" for this suite.
Jan 26 14:07:06.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:07:06.108: INFO: namespace statefulset-8666 deletion completed in 8.126782961s
• [SLOW TEST:149.118 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:07:06.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:07:06.345: INFO: Create a RollingUpdate DaemonSet
Jan 26 14:07:06.350: INFO: Check that daemon pods launch on every node of the cluster
Jan 26 14:07:06.422: INFO: Number of nodes with available pods: 0
Jan 26 14:07:06.422: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:09.483: INFO: Number of nodes with available pods: 0
Jan 26 14:07:09.483: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:10.438: INFO: Number of nodes with available pods: 0
Jan 26 14:07:10.438: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:11.440: INFO: Number of nodes with available pods: 0
Jan 26 14:07:11.440: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:13.576: INFO: Number of nodes with available pods: 0
Jan 26 14:07:13.576: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:14.440: INFO: Number of nodes with available pods: 0
Jan 26 14:07:14.440: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:15.456: INFO: Number of nodes with available pods: 0
Jan 26 14:07:15.456: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:07:16.435: INFO: Number of nodes with available pods: 1
Jan 26 14:07:16.435: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 26 14:07:17.439: INFO: Number of nodes with available pods: 2
Jan 26 14:07:17.439: INFO: Number of running nodes: 2, number of available pods: 2
Jan 26 14:07:17.439: INFO: Update the DaemonSet to trigger a rollout
Jan 26 14:07:17.470: INFO: Updating DaemonSet daemon-set
Jan 26 14:07:23.501: INFO: Roll back the DaemonSet before rollout is complete
Jan 26 14:07:23.509: INFO: Updating DaemonSet daemon-set
Jan 26 14:07:23.509: INFO: Make sure DaemonSet rollback is complete
Jan 26 14:07:23.523: INFO: Wrong image for pod: daemon-set-hxc78. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 26 14:07:23.523: INFO: Pod daemon-set-hxc78 is not available
Jan 26 14:07:24.656: INFO: Wrong image for pod: daemon-set-hxc78. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 26 14:07:24.656: INFO: Pod daemon-set-hxc78 is not available
Jan 26 14:07:25.657: INFO: Wrong image for pod: daemon-set-hxc78. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 26 14:07:25.657: INFO: Pod daemon-set-hxc78 is not available
Jan 26 14:07:26.661: INFO: Wrong image for pod: daemon-set-hxc78. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 26 14:07:26.662: INFO: Pod daemon-set-hxc78 is not available
Jan 26 14:07:27.667: INFO: Wrong image for pod: daemon-set-hxc78. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 26 14:07:27.667: INFO: Pod daemon-set-hxc78 is not available
Jan 26 14:07:28.663: INFO: Pod daemon-set-vmtw4 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5071, will wait for the garbage collector to delete the pods
Jan 26 14:07:28.749: INFO: Deleting DaemonSet.extensions daemon-set took: 11.647423ms
Jan 26 14:07:29.150: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.678362ms
Jan 26 14:07:37.957: INFO: Number of nodes with available pods: 0
Jan 26 14:07:37.958: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 14:07:37.992: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5071/daemonsets","resourceVersion":"21944954"},"items":null}
Jan 26 14:07:37.996: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5071/pods","resourceVersion":"21944954"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:07:38.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5071" for this suite.
Jan 26 14:07:44.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:07:44.110: INFO: namespace daemonsets-5071 deletion completed in 6.101098976s
• [SLOW TEST:38.002 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:07:44.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 26 14:07:44.250: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:07:58.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1999" for this suite.
Jan 26 14:08:06.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:08:06.434: INFO: namespace init-container-1999 deletion completed in 8.19639562s
• [SLOW TEST:22.323 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:08:06.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 26 14:08:06.503: INFO: Waiting up to 5m0s for pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede" in namespace "emptydir-4329" to be "success or failure"
Jan 26 14:08:06.602: INFO: Pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede": Phase="Pending", Reason="", readiness=false. Elapsed: 99.814561ms
Jan 26 14:08:08.619: INFO: Pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116331999s
Jan 26 14:08:10.626: INFO: Pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123467008s
Jan 26 14:08:12.636: INFO: Pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132885829s
Jan 26 14:08:14.649: INFO: Pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146570515s
STEP: Saw pod success
Jan 26 14:08:14.650: INFO: Pod "pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede" satisfied condition "success or failure"
Jan 26 14:08:14.656: INFO: Trying to get logs from node iruya-node pod pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede container test-container:
STEP: delete the pod
Jan 26 14:08:14.742: INFO: Waiting for pod pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede to disappear
Jan 26 14:08:14.749: INFO: Pod pod-c818d59a-cc64-4bfa-ba36-5261cdb90ede no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:08:14.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4329" for this suite.
Jan 26 14:08:20.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:08:20.938: INFO: namespace emptydir-4329 deletion completed in 6.183491788s
• [SLOW TEST:14.504 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:08:20.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 26 14:08:21.112: INFO: Waiting up to 5m0s for pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd" in namespace "containers-1712" to be "success or failure"
Jan 26 14:08:21.123: INFO: Pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.138508ms
Jan 26 14:08:23.193: INFO: Pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081147932s
Jan 26 14:08:25.199: INFO: Pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087277702s
Jan 26 14:08:27.204: INFO: Pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091885371s
Jan 26 14:08:29.275: INFO: Pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.162972594s
STEP: Saw pod success
Jan 26 14:08:29.275: INFO: Pod "client-containers-6c601fef-399f-483d-a814-89379fede8fd" satisfied condition "success or failure"
Jan 26 14:08:29.279: INFO: Trying to get logs from node iruya-node pod client-containers-6c601fef-399f-483d-a814-89379fede8fd container test-container:
STEP: delete the pod
Jan 26 14:08:29.373: INFO: Waiting for pod client-containers-6c601fef-399f-483d-a814-89379fede8fd to disappear
Jan 26 14:08:29.459: INFO: Pod client-containers-6c601fef-399f-483d-a814-89379fede8fd no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:08:29.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1712" for this suite.
Jan 26 14:08:35.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:08:35.629: INFO: namespace containers-1712 deletion completed in 6.161539101s
• [SLOW TEST:14.691 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:08:35.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-6780
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6780 to expose endpoints map[]
Jan 26 14:08:35.824: INFO: Get endpoints failed (11.439988ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 26 14:08:36.832: INFO: successfully validated that service endpoint-test2 in namespace services-6780 exposes endpoints map[] (1.019483102s elapsed)
STEP: Creating pod pod1 in namespace services-6780
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6780 to expose endpoints map[pod1:[80]]
Jan 26 14:08:40.967: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.12408173s elapsed, will retry)
Jan 26 14:08:45.020: INFO: successfully validated that service endpoint-test2 in namespace services-6780 exposes endpoints map[pod1:[80]] (8.177246249s elapsed)
STEP: Creating pod pod2 in namespace services-6780
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6780 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 26 14:08:49.292: INFO: Unexpected endpoints: found map[240c8575-1129-4acd-9e52-218d7bd0283a:[80]], expected map[pod1:[80] pod2:[80]] (4.265794997s elapsed, will retry)
Jan 26 14:08:52.602: INFO: successfully validated that service endpoint-test2 in namespace services-6780 exposes endpoints map[pod1:[80] pod2:[80]] (7.576374565s elapsed)
STEP: Deleting pod pod1 in namespace services-6780
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6780 to expose endpoints map[pod2:[80]]
Jan 26 14:08:53.742: INFO: successfully validated that service endpoint-test2 in namespace services-6780 exposes endpoints map[pod2:[80]] (1.131467689s elapsed)
STEP: Deleting pod pod2 in namespace services-6780
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6780 to expose endpoints map[]
Jan 26 14:08:54.862: INFO: successfully validated that service endpoint-test2 in namespace services-6780 exposes endpoints map[] (1.11191991s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:08:56.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6780" for this suite.
Jan 26 14:09:02.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:09:02.448: INFO: namespace services-6780 deletion completed in 6.234767003s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:26.818 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:09:02.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:09:02.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9517" for this suite.
Jan 26 14:09:08.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:09:08.725: INFO: namespace services-9517 deletion completed in 6.202715582s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.277 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:09:08.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cbea0c59-b8d8-4c0d-bd1b-ec8727190fa1 STEP: Creating a pod to test consume secrets Jan 26 14:09:08.888: INFO: Waiting up to 5m0s for pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9" in namespace "secrets-1850" to be "success or failure" Jan 26 14:09:08.900: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.143271ms Jan 26 14:09:10.909: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02054631s Jan 26 14:09:12.923: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03491215s Jan 26 14:09:14.932: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043617786s Jan 26 14:09:16.939: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050609038s Jan 26 14:09:18.945: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056687428s STEP: Saw pod success Jan 26 14:09:18.945: INFO: Pod "pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9" satisfied condition "success or failure" Jan 26 14:09:18.948: INFO: Trying to get logs from node iruya-node pod pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9 container secret-volume-test: STEP: delete the pod Jan 26 14:09:19.202: INFO: Waiting for pod pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9 to disappear Jan 26 14:09:19.219: INFO: Pod pod-secrets-e8f57acd-407d-4f4a-903e-06869471f2f9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 26 14:09:19.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1850" for this suite. 
Jan 26 14:09:25.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 26 14:09:25.400: INFO: namespace secrets-1850 deletion completed in 6.172244672s • [SLOW TEST:16.675 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 26 14:09:25.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 26 14:09:25.484: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 11.501156ms)
Jan 26 14:09:25.490: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.714565ms)
Jan 26 14:09:25.527: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 37.207624ms)
Jan 26 14:09:25.540: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.274021ms)
Jan 26 14:09:25.547: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.58155ms)
Jan 26 14:09:25.552: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.577662ms)
Jan 26 14:09:25.556: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.352068ms)
Jan 26 14:09:25.561: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.735157ms)
Jan 26 14:09:25.566: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.968217ms)
Jan 26 14:09:25.573: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.497431ms)
Jan 26 14:09:25.579: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.015632ms)
Jan 26 14:09:25.584: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.296284ms)
Jan 26 14:09:25.591: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.463858ms)
Jan 26 14:09:25.599: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.580786ms)
Jan 26 14:09:25.605: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.166995ms)
Jan 26 14:09:25.614: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.2349ms)
Jan 26 14:09:25.619: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.88293ms)
Jan 26 14:09:25.627: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.387945ms)
Jan 26 14:09:25.634: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.121132ms)
Jan 26 14:09:25.639: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.201142ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:09:25.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6625" for this suite.
Jan 26 14:09:31.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:09:31.831: INFO: namespace proxy-6625 deletion completed in 6.186179796s

• [SLOW TEST:6.430 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:09:31.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 26 14:09:31.967: INFO: Waiting up to 5m0s for pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5" in namespace "emptydir-328" to be "success or failure"
Jan 26 14:09:31.988: INFO: Pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.333915ms
Jan 26 14:09:33.999: INFO: Pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032260972s
Jan 26 14:09:36.005: INFO: Pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038420076s
Jan 26 14:09:38.014: INFO: Pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047157192s
Jan 26 14:09:40.175: INFO: Pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.208405713s
STEP: Saw pod success
Jan 26 14:09:40.175: INFO: Pod "pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5" satisfied condition "success or failure"
Jan 26 14:09:40.178: INFO: Trying to get logs from node iruya-node pod pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5 container test-container: 
STEP: delete the pod
Jan 26 14:09:40.326: INFO: Waiting for pod pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5 to disappear
Jan 26 14:09:40.338: INFO: Pod pod-1d1b8b40-c6f3-4828-a754-7e56aa0c01b5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:09:40.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-328" for this suite.
Jan 26 14:09:46.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:09:46.525: INFO: namespace emptydir-328 deletion completed in 6.18100951s

• [SLOW TEST:14.694 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:09:46.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 26 14:09:53.803: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:09:53.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2074" for this suite.
Jan 26 14:09:59.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:10:00.124: INFO: namespace container-runtime-2074 deletion completed in 6.236554724s

• [SLOW TEST:13.598 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:10:00.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 14:10:00.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7859'
Jan 26 14:10:00.468: INFO: stderr: ""
Jan 26 14:10:00.468: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 26 14:10:00.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7859'
Jan 26 14:10:06.585: INFO: stderr: ""
Jan 26 14:10:06.585: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:10:06.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7859" for this suite.
Jan 26 14:10:12.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:10:12.821: INFO: namespace kubectl-7859 deletion completed in 6.226282452s

• [SLOW TEST:12.697 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:10:12.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:10:12.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e" in namespace "projected-9402" to be "success or failure"
Jan 26 14:10:12.974: INFO: Pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.057049ms
Jan 26 14:10:14.981: INFO: Pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025538695s
Jan 26 14:10:16.990: INFO: Pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035442382s
Jan 26 14:10:19.043: INFO: Pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087878655s
Jan 26 14:10:21.053: INFO: Pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.097672428s
STEP: Saw pod success
Jan 26 14:10:21.053: INFO: Pod "downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e" satisfied condition "success or failure"
Jan 26 14:10:21.063: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e container client-container: 
STEP: delete the pod
Jan 26 14:10:21.157: INFO: Waiting for pod downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e to disappear
Jan 26 14:10:21.162: INFO: Pod downwardapi-volume-42c98597-abf1-48ba-9223-e7c162e2b99e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:10:21.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9402" for this suite.
Jan 26 14:10:27.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:10:27.404: INFO: namespace projected-9402 deletion completed in 6.141323424s

• [SLOW TEST:14.582 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:10:27.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 14:10:27.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3895'
Jan 26 14:10:29.438: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 14:10:29.438: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 26 14:10:31.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3895'
Jan 26 14:10:31.634: INFO: stderr: ""
Jan 26 14:10:31.634: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:10:31.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3895" for this suite.
Jan 26 14:10:37.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:10:37.918: INFO: namespace kubectl-3895 deletion completed in 6.244515032s

• [SLOW TEST:10.514 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:10:37.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1152/configmap-test-b7bc11fc-d6f3-4999-823a-f78283c3662a
STEP: Creating a pod to test consume configMaps
Jan 26 14:10:38.037: INFO: Waiting up to 5m0s for pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb" in namespace "configmap-1152" to be "success or failure"
Jan 26 14:10:38.048: INFO: Pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.531629ms
Jan 26 14:10:40.056: INFO: Pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01904422s
Jan 26 14:10:42.063: INFO: Pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026373486s
Jan 26 14:10:44.076: INFO: Pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038781129s
Jan 26 14:10:46.086: INFO: Pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048670519s
STEP: Saw pod success
Jan 26 14:10:46.086: INFO: Pod "pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb" satisfied condition "success or failure"
Jan 26 14:10:46.091: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb container env-test: 
STEP: delete the pod
Jan 26 14:10:46.293: INFO: Waiting for pod pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb to disappear
Jan 26 14:10:46.306: INFO: Pod pod-configmaps-6010c7a7-afab-4c10-ab63-3ca7957176fb no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:10:46.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1152" for this suite.
Jan 26 14:10:52.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:10:52.483: INFO: namespace configmap-1152 deletion completed in 6.168357505s

• [SLOW TEST:14.564 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:10:52.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c1f0819f-9cb8-4b96-83e6-250a40836fa8
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c1f0819f-9cb8-4b96-83e6-250a40836fa8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:11:02.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5539" for this suite.
Jan 26 14:11:24.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:11:24.972: INFO: namespace projected-5539 deletion completed in 22.196122116s

• [SLOW TEST:32.487 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:11:24.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:11:51.132: INFO: Container started at 2020-01-26 14:11:30 +0000 UTC, pod became ready at 2020-01-26 14:11:49 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:11:51.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6506" for this suite.
Jan 26 14:12:13.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:12:13.274: INFO: namespace container-probe-6506 deletion completed in 22.131677583s

• [SLOW TEST:48.301 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:12:13.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-94d17c09-9662-43dd-9b51-10e40de0d5b0
STEP: Creating a pod to test consume configMaps
Jan 26 14:12:13.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775" in namespace "configmap-6279" to be "success or failure"
Jan 26 14:12:13.387: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366438ms
Jan 26 14:12:15.397: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012863617s
Jan 26 14:12:17.402: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01830722s
Jan 26 14:12:19.414: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030648809s
Jan 26 14:12:21.420: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036467667s
Jan 26 14:12:23.444: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060079545s
STEP: Saw pod success
Jan 26 14:12:23.444: INFO: Pod "pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775" satisfied condition "success or failure"
Jan 26 14:12:23.451: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775 container configmap-volume-test: 
STEP: delete the pod
Jan 26 14:12:23.731: INFO: Waiting for pod pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775 to disappear
Jan 26 14:12:23.741: INFO: Pod pod-configmaps-e0de50f9-3f55-4c12-8e87-cd73c3c8b775 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:12:23.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6279" for this suite.
Jan 26 14:12:29.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:12:29.954: INFO: namespace configmap-6279 deletion completed in 6.204221098s

• [SLOW TEST:16.680 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
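The "Waiting up to 5m0s for pod … to be \"success or failure\"" lines in the spec above come from the framework's poll loop: it rechecks the pod phase every couple of seconds until the pod reaches a terminal phase (Succeeded or Failed) or the timeout expires. A minimal Python sketch of that pattern, with a stubbed phase lookup standing in for the API call (the function names and the stubbed phase sequence are illustrative, not part of the e2e framework):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout expires.

    Mirrors the log's "success or failure" condition: the wait ends as soon
    as the pod reports Succeeded or Failed, logging elapsed time each poll.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.1f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase!r} after {timeout}s')
        sleep(interval)

# Stubbed API: Pending for the first few polls, then Succeeded —
# the same progression the log above records.
phases = iter(['Pending', 'Pending', 'Pending', 'Succeeded'])
result = wait_for_pod_phase(lambda: next(phases), sleep=lambda _: None)
```

With a real client the stub would be replaced by a read of the pod's `status.phase`; the timeout and interval defaults here match the 5m0s / ~2s cadence visible in the log.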
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:12:29.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:12:30.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9" in namespace "downward-api-349" to be "success or failure"
Jan 26 14:12:30.099: INFO: Pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.103373ms
Jan 26 14:12:32.146: INFO: Pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065707244s
Jan 26 14:12:34.152: INFO: Pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071778115s
Jan 26 14:12:36.163: INFO: Pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083055774s
Jan 26 14:12:38.182: INFO: Pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101225518s
STEP: Saw pod success
Jan 26 14:12:38.182: INFO: Pod "downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9" satisfied condition "success or failure"
Jan 26 14:12:38.187: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9 container client-container: 
STEP: delete the pod
Jan 26 14:12:38.304: INFO: Waiting for pod downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9 to disappear
Jan 26 14:12:38.323: INFO: Pod downwardapi-volume-92081d7f-c04c-42c3-bb60-6cafb7e29ae9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:12:38.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-349" for this suite.
Jan 26 14:12:44.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:12:44.565: INFO: namespace downward-api-349 deletion completed in 6.214384908s

• [SLOW TEST:14.610 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:12:44.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-26b7d768-f531-497c-aa7f-82d32804d2f1
STEP: Creating a pod to test consume secrets
Jan 26 14:12:44.681: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a" in namespace "projected-7196" to be "success or failure"
Jan 26 14:12:44.687: INFO: Pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.997419ms
Jan 26 14:12:46.704: INFO: Pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023692313s
Jan 26 14:12:48.712: INFO: Pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031571806s
Jan 26 14:12:50.719: INFO: Pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037837776s
Jan 26 14:12:52.725: INFO: Pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044662809s
STEP: Saw pod success
Jan 26 14:12:52.725: INFO: Pod "pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a" satisfied condition "success or failure"
Jan 26 14:12:52.729: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 14:12:52.943: INFO: Waiting for pod pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a to disappear
Jan 26 14:12:52.954: INFO: Pod pod-projected-secrets-cf9f5a94-f250-4e2e-955a-23551db3f01a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:12:52.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7196" for this suite.
Jan 26 14:12:59.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:12:59.217: INFO: namespace projected-7196 deletion completed in 6.252856437s

• [SLOW TEST:14.650 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:12:59.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 26 14:13:07.443: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4db5695d-8cc0-4933-9586-81cd0d6aa949,GenerateName:,Namespace:events-3464,SelfLink:/api/v1/namespaces/events-3464/pods/send-events-4db5695d-8cc0-4933-9586-81cd0d6aa949,UID:6c50b06e-e8df-4489-ac9f-e3b62d45203a,ResourceVersion:21945858,Generation:0,CreationTimestamp:2020-01-26 14:12:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 381163885,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-msr2k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-msr2k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-msr2k true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0039b44a0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc0039b44c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:12:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:13:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:13:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:12:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-26 14:12:59 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-26 14:13:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://b3628af6832cd0d778fdcfb6f1b7965f9ba1d1265182b74b515889866c2a1c57}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 26 14:13:09.456: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 26 14:13:11.465: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:13:11.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3464" for this suite.
Jan 26 14:13:57.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:13:57.724: INFO: namespace events-3464 deletion completed in 46.156210219s

• [SLOW TEST:58.507 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
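The Events spec above performs two checks: it looks for an event about the pod emitted by the scheduler ("Saw scheduler event for our pod.") and then one emitted by the kubelet. A sketch of that filtering in Python, over a simplified event list (the field names and sample events are illustrative, flattened from the real Event API's `involvedObject`/`source` structure):

```python
def events_for_pod(events, pod_name, source):
    """Return events emitted by `source` (e.g. 'default-scheduler' or
    'kubelet') about the named pod — the two lookups the test performs."""
    return [e for e in events
            if e['involvedObject'] == pod_name and e['source'] == source]

# Illustrative event stream for a pod like send-events-... in the log.
events = [
    {'involvedObject': 'send-events-x', 'source': 'default-scheduler', 'reason': 'Scheduled'},
    {'involvedObject': 'send-events-x', 'source': 'kubelet', 'reason': 'Pulled'},
    {'involvedObject': 'other-pod', 'source': 'kubelet', 'reason': 'Started'},
]
scheduler_events = events_for_pod(events, 'send-events-x', 'default-scheduler')
kubelet_events = events_for_pod(events, 'send-events-x', 'kubelet')
```

The real test polls the Events API with a field selector until each list is non-empty, which is why the two "Saw … event" lines appear a couple of seconds apart.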
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:13:57.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 26 14:13:57.920: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 26 14:14:02.929: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:14:04.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6331" for this suite.
Jan 26 14:14:10.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:14:10.335: INFO: namespace replication-controller-6331 deletion completed in 6.188083576s

• [SLOW TEST:12.610 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
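The ReplicationController spec above ("When the matched label of one of its pods change … Then the pod is released") exercises orphaning: once a pod's labels stop matching the controller's selector, the controller releases it by dropping its controller ownerReference instead of deleting it. A minimal sketch of that selector logic, with simplified dict-based pod and selector shapes (illustrative, not the client-go types):

```python
def matches(selector, labels):
    """True when every selector key/value pair is present in the pod's labels,
    which is how an RC's equality-based selector is evaluated."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {'name': 'pod-release'}
pod = {'labels': {'name': 'pod-release'},
       'ownerReferences': ['rc/pod-release']}
assert matches(selector, pod['labels'])  # initially adopted by the RC

# The test mutates the matched label; on the controller's next sync the
# pod no longer matches, so it is orphaned rather than deleted.
pod['labels']['name'] = 'released'
if not matches(selector, pod['labels']):
    pod['ownerReferences'] = []
```

The released pod keeps running; the RC then creates a replacement to restore its replica count, which is why the log shows the namespace still tearing down pods afterwards.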
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:14:10.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0176d708-bc42-4f2a-903c-61675c9bc83b
STEP: Creating a pod to test consume configMaps
Jan 26 14:14:10.577: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099" in namespace "projected-7634" to be "success or failure"
Jan 26 14:14:10.617: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099": Phase="Pending", Reason="", readiness=false. Elapsed: 40.331285ms
Jan 26 14:14:12.632: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055036844s
Jan 26 14:14:14.649: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072263528s
Jan 26 14:14:16.659: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081574241s
Jan 26 14:14:18.666: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088741184s
Jan 26 14:14:20.682: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105557136s
STEP: Saw pod success
Jan 26 14:14:20.683: INFO: Pod "pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099" satisfied condition "success or failure"
Jan 26 14:14:20.690: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 14:14:20.893: INFO: Waiting for pod pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099 to disappear
Jan 26 14:14:20.899: INFO: Pod pod-projected-configmaps-2e981d7d-955b-4934-a47e-c9680bda8099 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:14:20.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7634" for this suite.
Jan 26 14:14:28.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:14:29.131: INFO: namespace projected-7634 deletion completed in 8.224119001s

• [SLOW TEST:18.796 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:14:29.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2d7ef83c-b1b2-45e9-b920-f11bdf61f528
STEP: Creating a pod to test consume configMaps
Jan 26 14:14:29.255: INFO: Waiting up to 5m0s for pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c" in namespace "configmap-8786" to be "success or failure"
Jan 26 14:14:29.300: INFO: Pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.457754ms
Jan 26 14:14:31.309: INFO: Pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05374439s
Jan 26 14:14:33.322: INFO: Pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066648287s
Jan 26 14:14:35.334: INFO: Pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078650713s
Jan 26 14:14:37.345: INFO: Pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089170241s
STEP: Saw pod success
Jan 26 14:14:37.345: INFO: Pod "pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c" satisfied condition "success or failure"
Jan 26 14:14:37.357: INFO: Trying to get logs from node iruya-node pod pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c container configmap-volume-test: 
STEP: delete the pod
Jan 26 14:14:37.410: INFO: Waiting for pod pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c to disappear
Jan 26 14:14:37.417: INFO: Pod pod-configmaps-18435bad-d71a-4ea6-a493-13015ec7f62c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:14:37.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8786" for this suite.
Jan 26 14:14:43.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:14:43.606: INFO: namespace configmap-8786 deletion completed in 6.179311259s

• [SLOW TEST:14.474 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:14:43.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 26 14:14:43.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2374'
Jan 26 14:14:44.200: INFO: stderr: ""
Jan 26 14:14:44.200: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 14:14:44.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:14:44.381: INFO: stderr: ""
Jan 26 14:14:44.381: INFO: stdout: "update-demo-nautilus-drnsh update-demo-nautilus-frtp4 "
Jan 26 14:14:44.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:14:44.542: INFO: stderr: ""
Jan 26 14:14:44.542: INFO: stdout: ""
Jan 26 14:14:44.542: INFO: update-demo-nautilus-drnsh is created but not running
Jan 26 14:14:49.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:14:49.868: INFO: stderr: ""
Jan 26 14:14:49.868: INFO: stdout: "update-demo-nautilus-drnsh update-demo-nautilus-frtp4 "
Jan 26 14:14:49.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:14:49.979: INFO: stderr: ""
Jan 26 14:14:49.979: INFO: stdout: ""
Jan 26 14:14:49.979: INFO: update-demo-nautilus-drnsh is created but not running
Jan 26 14:14:54.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:14:55.160: INFO: stderr: ""
Jan 26 14:14:55.160: INFO: stdout: "update-demo-nautilus-drnsh update-demo-nautilus-frtp4 "
Jan 26 14:14:55.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:14:55.273: INFO: stderr: ""
Jan 26 14:14:55.273: INFO: stdout: "true"
Jan 26 14:14:55.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:14:55.467: INFO: stderr: ""
Jan 26 14:14:55.467: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 14:14:55.467: INFO: validating pod update-demo-nautilus-drnsh
Jan 26 14:14:55.475: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 14:14:55.475: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 14:14:55.475: INFO: update-demo-nautilus-drnsh is verified up and running
Jan 26 14:14:55.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frtp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:14:55.628: INFO: stderr: ""
Jan 26 14:14:55.628: INFO: stdout: "true"
Jan 26 14:14:55.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-frtp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:14:55.736: INFO: stderr: ""
Jan 26 14:14:55.736: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 14:14:55.736: INFO: validating pod update-demo-nautilus-frtp4
Jan 26 14:14:55.761: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 14:14:55.761: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 14:14:55.761: INFO: update-demo-nautilus-frtp4 is verified up and running
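Each "validating pod … got data: { \"image\": \"nautilus.jpg\" }" exchange above is the test fetching a small JSON payload served by the pod and comparing its `image` field against the expected value ("Unmarshalled json … expecting nautilus.jpg"). A sketch of that check in Python, with the raw payload inlined from the log (the helper name is illustrative):

```python
import json

def validate_pod_payload(raw, expected_image):
    """Decode the JSON body a nautilus pod serves and compare its "image"
    field — the same comparison the "Unmarshalled json" log lines report."""
    data = json.loads(raw)
    return data.get('image') == expected_image

# Payload as it appears in the log above.
raw = '{\n  "image": "nautilus.jpg"\n}'
ok = validate_pod_payload(raw, 'nautilus.jpg')
```

Only after this data check passes does the test log "verified up and running" and move on to the next pod or scaling step.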
STEP: scaling down the replication controller
Jan 26 14:14:55.764: INFO: scanned /root for discovery docs: 
Jan 26 14:14:55.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2374'
Jan 26 14:14:56.922: INFO: stderr: ""
Jan 26 14:14:56.922: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 14:14:56.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:14:57.096: INFO: stderr: ""
Jan 26 14:14:57.096: INFO: stdout: "update-demo-nautilus-drnsh update-demo-nautilus-frtp4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 26 14:15:02.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:15:02.277: INFO: stderr: ""
Jan 26 14:15:02.277: INFO: stdout: "update-demo-nautilus-drnsh "
Jan 26 14:15:02.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:02.398: INFO: stderr: ""
Jan 26 14:15:02.398: INFO: stdout: "true"
Jan 26 14:15:02.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:02.529: INFO: stderr: ""
Jan 26 14:15:02.529: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 14:15:02.529: INFO: validating pod update-demo-nautilus-drnsh
Jan 26 14:15:02.538: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 14:15:02.538: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 14:15:02.538: INFO: update-demo-nautilus-drnsh is verified up and running
STEP: scaling up the replication controller
Jan 26 14:15:02.541: INFO: scanned /root for discovery docs: 
Jan 26 14:15:02.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2374'
Jan 26 14:15:03.704: INFO: stderr: ""
Jan 26 14:15:03.704: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 14:15:03.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:15:03.853: INFO: stderr: ""
Jan 26 14:15:03.853: INFO: stdout: "update-demo-nautilus-2kn8d update-demo-nautilus-drnsh "
Jan 26 14:15:03.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kn8d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:04.038: INFO: stderr: ""
Jan 26 14:15:04.038: INFO: stdout: ""
Jan 26 14:15:04.038: INFO: update-demo-nautilus-2kn8d is created but not running
Jan 26 14:15:09.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2374'
Jan 26 14:15:09.192: INFO: stderr: ""
Jan 26 14:15:09.192: INFO: stdout: "update-demo-nautilus-2kn8d update-demo-nautilus-drnsh "
Jan 26 14:15:09.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kn8d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:09.290: INFO: stderr: ""
Jan 26 14:15:09.290: INFO: stdout: "true"
Jan 26 14:15:09.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2kn8d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:09.379: INFO: stderr: ""
Jan 26 14:15:09.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 14:15:09.379: INFO: validating pod update-demo-nautilus-2kn8d
Jan 26 14:15:09.395: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 14:15:09.395: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 14:15:09.395: INFO: update-demo-nautilus-2kn8d is verified up and running
Jan 26 14:15:09.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:09.525: INFO: stderr: ""
Jan 26 14:15:09.525: INFO: stdout: "true"
Jan 26 14:15:09.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drnsh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2374'
Jan 26 14:15:09.617: INFO: stderr: ""
Jan 26 14:15:09.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 14:15:09.617: INFO: validating pod update-demo-nautilus-drnsh
Jan 26 14:15:09.623: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 14:15:09.623: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 14:15:09.623: INFO: update-demo-nautilus-drnsh is verified up and running
STEP: using delete to clean up resources
Jan 26 14:15:09.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2374'
Jan 26 14:15:09.733: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 14:15:09.733: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 26 14:15:09.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2374'
Jan 26 14:15:09.843: INFO: stderr: "No resources found.\n"
Jan 26 14:15:09.843: INFO: stdout: ""
Jan 26 14:15:09.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2374 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 14:15:10.013: INFO: stderr: ""
Jan 26 14:15:10.013: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:15:10.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2374" for this suite.
Jan 26 14:15:32.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:15:32.210: INFO: namespace kubectl-2374 deletion completed in 22.190603872s

• [SLOW TEST:48.600 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:15:32.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:15:32.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4409'
Jan 26 14:15:32.668: INFO: stderr: ""
Jan 26 14:15:32.669: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 26 14:15:32.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4409'
Jan 26 14:15:33.085: INFO: stderr: ""
Jan 26 14:15:33.085: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 26 14:15:34.100: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:34.100: INFO: Found 0 / 1
Jan 26 14:15:35.095: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:35.095: INFO: Found 0 / 1
Jan 26 14:15:36.103: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:36.103: INFO: Found 0 / 1
Jan 26 14:15:37.099: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:37.099: INFO: Found 0 / 1
Jan 26 14:15:38.148: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:38.148: INFO: Found 0 / 1
Jan 26 14:15:39.105: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:39.105: INFO: Found 0 / 1
Jan 26 14:15:40.101: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:40.101: INFO: Found 1 / 1
Jan 26 14:15:40.102: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 26 14:15:40.109: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 14:15:40.109: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 14:15:40.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-s7dqp --namespace=kubectl-4409'
Jan 26 14:15:40.299: INFO: stderr: ""
Jan 26 14:15:40.300: INFO: stdout: "Name:           redis-master-s7dqp\nNamespace:      kubectl-4409\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 26 Jan 2020 14:15:32 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://41c1679f47e022a7106b346ef2dd2092d4e35f615383a573e47e126f9ab634d8\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 26 Jan 2020 14:15:38 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-p5bvn (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-p5bvn:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-p5bvn\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-4409/redis-master-s7dqp to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Jan 26 14:15:40.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4409'
Jan 26 14:15:40.432: INFO: stderr: ""
Jan 26 14:15:40.432: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4409\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-s7dqp\n"
Jan 26 14:15:40.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4409'
Jan 26 14:15:40.583: INFO: stderr: ""
Jan 26 14:15:40.583: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4409\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.190.175\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 26 14:15:40.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 26 14:15:40.766: INFO: stderr: ""
Jan 26 14:15:40.766: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 26 Jan 2020 14:15:37 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 26 Jan 2020 14:15:37 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 26 Jan 2020 14:15:37 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 26 Jan 2020 14:15:37 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         175d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         106d\n  kubectl-4409               redis-master-s7dqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 26 14:15:40.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4409'
Jan 26 14:15:40.882: INFO: stderr: ""
Jan 26 14:15:40.882: INFO: stdout: "Name:         kubectl-4409\nLabels:       e2e-framework=kubectl\n              e2e-run=858f472d-16d0-408e-84a4-6ce7a839b4ac\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:15:40.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4409" for this suite.
Jan 26 14:16:02.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:16:03.040: INFO: namespace kubectl-4409 deletion completed in 22.153408039s

• [SLOW TEST:30.829 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:16:03.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0126 14:16:06.529796       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 14:16:06.529: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:16:06.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6398" for this suite.
Jan 26 14:16:13.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:16:13.082: INFO: namespace gc-6398 deletion completed in 6.545915821s

• [SLOW TEST:10.042 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:16:13.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0126 14:16:26.574176       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 14:16:26.574: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:16:26.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6165" for this suite.
Jan 26 14:16:37.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:16:37.377: INFO: namespace gc-6165 deletion completed in 10.797059836s

• [SLOW TEST:24.295 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:16:37.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-d99dd16d-0ac0-4557-80d9-b2c0447cc1ba
STEP: Creating secret with name secret-projected-all-test-volume-4cd088e2-3c41-433c-9761-3ca385f3499e
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 26 14:16:37.620: INFO: Waiting up to 5m0s for pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20" in namespace "projected-4693" to be "success or failure"
Jan 26 14:16:37.639: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 18.476194ms
Jan 26 14:16:40.656: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 3.035471122s
Jan 26 14:16:43.398: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 5.778318888s
Jan 26 14:16:45.407: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 7.786998341s
Jan 26 14:16:47.417: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 9.797347699s
Jan 26 14:16:49.429: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 11.808584074s
Jan 26 14:16:51.437: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Pending", Reason="", readiness=false. Elapsed: 13.817101917s
Jan 26 14:16:53.444: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.823447012s
STEP: Saw pod success
Jan 26 14:16:53.444: INFO: Pod "projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20" satisfied condition "success or failure"
Jan 26 14:16:53.447: INFO: Trying to get logs from node iruya-node pod projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20 container projected-all-volume-test: 
STEP: delete the pod
Jan 26 14:16:53.524: INFO: Waiting for pod projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20 to disappear
Jan 26 14:16:53.551: INFO: Pod projected-volume-28893854-0be1-432d-b77a-d4a11a1f3b20 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:16:53.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4693" for this suite.
Jan 26 14:16:59.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:16:59.807: INFO: namespace projected-4693 deletion completed in 6.194321603s

• [SLOW TEST:22.429 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:16:59.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:17:09.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3605" for this suite.
Jan 26 14:18:02.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:18:02.262: INFO: namespace kubelet-test-3605 deletion completed in 52.267820912s

• [SLOW TEST:62.454 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:18:02.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 26 14:18:13.062: INFO: Successfully updated pod "annotationupdate833293fb-b49e-4a9b-b327-ca9f6da18ba8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:18:15.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4179" for this suite.
Jan 26 14:18:37.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:18:37.331: INFO: namespace projected-4179 deletion completed in 22.172820429s

• [SLOW TEST:35.069 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:18:37.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 26 14:18:37.438: INFO: Waiting up to 5m0s for pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04" in namespace "emptydir-1347" to be "success or failure"
Jan 26 14:18:37.442: INFO: Pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141564ms
Jan 26 14:18:39.453: INFO: Pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014583343s
Jan 26 14:18:41.468: INFO: Pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029906327s
Jan 26 14:18:43.475: INFO: Pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037080813s
Jan 26 14:18:45.528: INFO: Pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089663768s
STEP: Saw pod success
Jan 26 14:18:45.528: INFO: Pod "pod-0ad5bff5-6049-4694-993f-9cb171da8e04" satisfied condition "success or failure"
Jan 26 14:18:45.531: INFO: Trying to get logs from node iruya-node pod pod-0ad5bff5-6049-4694-993f-9cb171da8e04 container test-container: 
STEP: delete the pod
Jan 26 14:18:45.702: INFO: Waiting for pod pod-0ad5bff5-6049-4694-993f-9cb171da8e04 to disappear
Jan 26 14:18:45.712: INFO: Pod pod-0ad5bff5-6049-4694-993f-9cb171da8e04 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:18:45.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1347" for this suite.
Jan 26 14:18:51.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:18:51.907: INFO: namespace emptydir-1347 deletion completed in 6.182319344s

• [SLOW TEST:14.576 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
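The `(root,0777,default)` case above writes into a default-medium emptyDir mount and asserts the directory carries mode 0777. A minimal local sketch of that permission check, using a temp directory in place of the pod's emptyDir mount (the path and helper name are illustrative, not from the e2e framework):

```python
import os
import stat
import tempfile

def mount_mode(path: str) -> int:
    """Return the permission bits (e.g. 0o777) of a directory."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Simulate the default-medium emptyDir volume the test pod mounts.
with tempfile.TemporaryDirectory() as vol:
    os.chmod(vol, 0o777)             # the test requests mode 0777 on the volume
    assert mount_mode(vol) == 0o777  # what the test container verifies
    print(oct(mount_mode(vol)))
```

The real test does this from inside the pod's test container and reports the result through the "success or failure" phase check logged above.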
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:18:51.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:18:52.051: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 26 14:18:57.063: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 26 14:19:01.079: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 26 14:19:09.123: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6010,SelfLink:/apis/apps/v1/namespaces/deployment-6010/deployments/test-cleanup-deployment,UID:276156a0-a4f9-4b74-bc27-fa3bd912a68c,ResourceVersion:21946864,Generation:1,CreationTimestamp:2020-01-26 14:19:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-26 14:19:01 +0000 UTC 2020-01-26 14:19:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-26 14:19:08 +0000 UTC 2020-01-26 14:19:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 26 14:19:09.127: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6010,SelfLink:/apis/apps/v1/namespaces/deployment-6010/replicasets/test-cleanup-deployment-55bbcbc84c,UID:260adb19-d6b9-43ef-92a7-6c0867a53b9d,ResourceVersion:21946855,Generation:1,CreationTimestamp:2020-01-26 14:19:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 276156a0-a4f9-4b74-bc27-fa3bd912a68c 0xc0015c30f7 0xc0015c30f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 26 14:19:09.132: INFO: Pod "test-cleanup-deployment-55bbcbc84c-8277r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-8277r,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6010,SelfLink:/api/v1/namespaces/deployment-6010/pods/test-cleanup-deployment-55bbcbc84c-8277r,UID:1913abdb-df6a-42ac-a5f6-cdd9efdf283a,ResourceVersion:21946854,Generation:0,CreationTimestamp:2020-01-26 14:19:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 260adb19-d6b9-43ef-92a7-6c0867a53b9d 0xc0039b9557 0xc0039b9558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-csv2j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-csv2j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-csv2j true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0039b95d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0039b95f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:19:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:19:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:19:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:19:01 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-26 14:19:01 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-26 14:19:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://aef86ca364636f538e059bc2d65a2f1cef2890d34fc11757a214367823b796fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:19:09.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6010" for this suite.
Jan 26 14:19:15.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:19:15.290: INFO: namespace deployment-6010 deletion completed in 6.15197228s

• [SLOW TEST:23.382 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
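The deployment dump above shows `RevisionHistoryLimit:*0`, which is why the old cleanup-pod replica set is deleted as soon as the new one becomes available. The pruning rule itself is simple: keep the newest N inactive revisions, delete the rest. A rough standalone sketch of that selection logic (not the actual controller code; the function name is illustrative):

```python
def old_replicasets_to_delete(revisions, history_limit):
    """Given inactive ReplicaSet revision numbers, return those exceeding
    the history limit, oldest first -- mirroring how a deployment
    controller prunes history. Keeps the `history_limit` newest."""
    keep = set(sorted(revisions, reverse=True)[:history_limit])
    return sorted(r for r in revisions if r not in keep)

# With revisionHistoryLimit: 0, every inactive revision is deleted:
print(old_replicasets_to_delete([1, 2, 3], 0))   # [1, 2, 3]
# With the default limit of 10, nothing here is pruned:
print(old_replicasets_to_delete([1, 2, 3], 10))  # []
```

This is what the test's "Waiting for deployment test-cleanup-deployment history to be cleaned up" step is observing: with a limit of 0, the old replica set must disappear entirely.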
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:19:15.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c95d0d1e-d0be-4a29-829d-a76981d68482
STEP: Creating a pod to test consume configMaps
Jan 26 14:19:15.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942" in namespace "configmap-1323" to be "success or failure"
Jan 26 14:19:15.495: INFO: Pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942": Phase="Pending", Reason="", readiness=false. Elapsed: 13.61025ms
Jan 26 14:19:17.503: INFO: Pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021792797s
Jan 26 14:19:19.510: INFO: Pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028949261s
Jan 26 14:19:21.519: INFO: Pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037785332s
Jan 26 14:19:23.525: INFO: Pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043637212s
STEP: Saw pod success
Jan 26 14:19:23.525: INFO: Pod "pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942" satisfied condition "success or failure"
Jan 26 14:19:23.528: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942 container configmap-volume-test: 
STEP: delete the pod
Jan 26 14:19:23.577: INFO: Waiting for pod pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942 to disappear
Jan 26 14:19:23.634: INFO: Pod pod-configmaps-4d1c03dc-fec2-4fc2-a170-f28212c68942 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:19:23.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1323" for this suite.
Jan 26 14:19:29.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:19:29.837: INFO: namespace configmap-1323 deletion completed in 6.19159689s

• [SLOW TEST:14.546 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
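The defaultMode case above is easy to misread in object dumps like the Deployment one earlier: the Kubernetes API serializes volume file modes as decimal integers, so a logged `DefaultMode:*420` is octal 0644. A quick sketch of the conversion (plain Python, no Kubernetes client; the helper name is illustrative):

```python
# Kubernetes serializes volume file modes as decimal integers,
# so the usual defaultMode of 420 is octal 0644 (rw-r--r--).
assert 0o644 == 420

def mode_to_decimal(octal_str: str) -> int:
    """Convert an octal mode string like '644' to the decimal value
    that appears in a ConfigMap volume's defaultMode field."""
    return int(octal_str, 8)

print(mode_to_decimal("644"))  # 420
print(mode_to_decimal("777"))  # 511
```

The test in this section sets a non-default mode on the ConfigMap volume and has the container stat the projected file to confirm the kubelet applied it.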
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:19:29.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6726
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6726
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6726
Jan 26 14:19:30.035: INFO: Found 0 stateful pods, waiting for 1
Jan 26 14:19:40.049: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 26 14:19:40.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:19:40.708: INFO: stderr: "I0126 14:19:40.326130    2171 log.go:172] (0xc0009e2420) (0xc000374820) Create stream\nI0126 14:19:40.326355    2171 log.go:172] (0xc0009e2420) (0xc000374820) Stream added, broadcasting: 1\nI0126 14:19:40.335508    2171 log.go:172] (0xc0009e2420) Reply frame received for 1\nI0126 14:19:40.335647    2171 log.go:172] (0xc0009e2420) (0xc00098c000) Create stream\nI0126 14:19:40.335687    2171 log.go:172] (0xc0009e2420) (0xc00098c000) Stream added, broadcasting: 3\nI0126 14:19:40.344737    2171 log.go:172] (0xc0009e2420) Reply frame received for 3\nI0126 14:19:40.344812    2171 log.go:172] (0xc0009e2420) (0xc00098c0a0) Create stream\nI0126 14:19:40.344831    2171 log.go:172] (0xc0009e2420) (0xc00098c0a0) Stream added, broadcasting: 5\nI0126 14:19:40.349597    2171 log.go:172] (0xc0009e2420) Reply frame received for 5\nI0126 14:19:40.512319    2171 log.go:172] (0xc0009e2420) Data frame received for 5\nI0126 14:19:40.512369    2171 log.go:172] (0xc00098c0a0) (5) Data frame handling\nI0126 14:19:40.512405    2171 log.go:172] (0xc00098c0a0) (5) Data frame sent\nI0126 14:19:40.512417    2171 log.go:172] (0xc0009e2420) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.htmlI0126 14:19:40.512440    2171 log.go:172] (0xc00098c0a0) (5) Data frame handling\nI0126 14:19:40.512538    2171 log.go:172] (0xc00098c0a0) (5) Data frame sent\n /tmp/\nI0126 14:19:40.572273    2171 log.go:172] (0xc0009e2420) Data frame received for 3\nI0126 14:19:40.572364    2171 log.go:172] (0xc00098c000) (3) Data frame handling\nI0126 14:19:40.572388    2171 log.go:172] (0xc00098c000) (3) Data frame sent\nI0126 14:19:40.697100    2171 log.go:172] (0xc0009e2420) (0xc00098c000) Stream removed, broadcasting: 3\nI0126 14:19:40.697365    2171 log.go:172] (0xc0009e2420) Data frame received for 1\nI0126 14:19:40.697457    2171 log.go:172] (0xc0009e2420) (0xc00098c0a0) Stream removed, broadcasting: 5\nI0126 14:19:40.697521    2171 log.go:172] (0xc000374820) (1) 
Data frame handling\nI0126 14:19:40.697599    2171 log.go:172] (0xc000374820) (1) Data frame sent\nI0126 14:19:40.697664    2171 log.go:172] (0xc0009e2420) (0xc000374820) Stream removed, broadcasting: 1\nI0126 14:19:40.697684    2171 log.go:172] (0xc0009e2420) Go away received\nI0126 14:19:40.699051    2171 log.go:172] (0xc0009e2420) (0xc000374820) Stream removed, broadcasting: 1\nI0126 14:19:40.699068    2171 log.go:172] (0xc0009e2420) (0xc00098c000) Stream removed, broadcasting: 3\nI0126 14:19:40.699076    2171 log.go:172] (0xc0009e2420) (0xc00098c0a0) Stream removed, broadcasting: 5\n"
Jan 26 14:19:40.709: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:19:40.709: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:19:40.717: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 26 14:19:50.729: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:19:50.729: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:19:50.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999612s
Jan 26 14:19:51.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98812487s
Jan 26 14:19:52.773: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978652987s
Jan 26 14:19:53.793: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970292039s
Jan 26 14:19:54.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.95032406s
Jan 26 14:19:55.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.935155981s
Jan 26 14:19:56.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.916715919s
Jan 26 14:19:57.851: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.906150433s
Jan 26 14:19:58.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.892878319s
Jan 26 14:19:59.878: INFO: Verifying statefulset ss doesn't scale past 1 for another 875.238067ms
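The countdown above is the inverse of the usual wait loop: instead of polling until a condition becomes true, the test polls for a fixed window and fails if the replica count ever exceeds the cap while ss-0 is unhealthy. A standalone sketch of that pattern, with an injectable clock and replica source so it runs without a cluster (all names are illustrative, not from the e2e framework):

```python
import time

def verify_no_scale_past(get_replicas, cap, window_s=10.0, interval_s=1.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_replicas() for window_s seconds and raise if the count
    ever exceeds cap -- the "doesn't scale past 1" check above."""
    deadline = clock() + window_s
    while clock() < deadline:
        n = get_replicas()
        if n > cap:
            raise AssertionError(f"scaled to {n}, expected at most {cap}")
        sleep(interval_s)

# Fake clock so the example completes instantly:
now = [0.0]
def fake_clock():
    return now[0]
def fake_sleep(s):
    now[0] += s

verify_no_scale_past(lambda: 1, cap=1, clock=fake_clock, sleep=fake_sleep)
print("held at 1 replica for the whole window")
```

Injecting the clock is what makes the window testable; the real framework simply sleeps between polls, which is why the log shows roughly one "Verifying" line per second.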
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6726
Jan 26 14:20:00.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:20:01.410: INFO: stderr: "I0126 14:20:01.092726    2191 log.go:172] (0xc00012adc0) (0xc000736aa0) Create stream\nI0126 14:20:01.092983    2191 log.go:172] (0xc00012adc0) (0xc000736aa0) Stream added, broadcasting: 1\nI0126 14:20:01.101669    2191 log.go:172] (0xc00012adc0) Reply frame received for 1\nI0126 14:20:01.101751    2191 log.go:172] (0xc00012adc0) (0xc00082a000) Create stream\nI0126 14:20:01.101767    2191 log.go:172] (0xc00012adc0) (0xc00082a000) Stream added, broadcasting: 3\nI0126 14:20:01.103620    2191 log.go:172] (0xc00012adc0) Reply frame received for 3\nI0126 14:20:01.103651    2191 log.go:172] (0xc00012adc0) (0xc0009ce000) Create stream\nI0126 14:20:01.103680    2191 log.go:172] (0xc00012adc0) (0xc0009ce000) Stream added, broadcasting: 5\nI0126 14:20:01.105176    2191 log.go:172] (0xc00012adc0) Reply frame received for 5\nI0126 14:20:01.251925    2191 log.go:172] (0xc00012adc0) Data frame received for 3\nI0126 14:20:01.252059    2191 log.go:172] (0xc00082a000) (3) Data frame handling\nI0126 14:20:01.252096    2191 log.go:172] (0xc00082a000) (3) Data frame sent\nI0126 14:20:01.252152    2191 log.go:172] (0xc00012adc0) Data frame received for 5\nI0126 14:20:01.252189    2191 log.go:172] (0xc0009ce000) (5) Data frame handling\nI0126 14:20:01.252206    2191 log.go:172] (0xc0009ce000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 14:20:01.399522    2191 log.go:172] (0xc00012adc0) Data frame received for 1\nI0126 14:20:01.399601    2191 log.go:172] (0xc00012adc0) (0xc00082a000) Stream removed, broadcasting: 3\nI0126 14:20:01.399645    2191 log.go:172] (0xc000736aa0) (1) Data frame handling\nI0126 14:20:01.399692    2191 log.go:172] (0xc000736aa0) (1) Data frame sent\nI0126 14:20:01.399711    2191 log.go:172] (0xc00012adc0) (0xc000736aa0) Stream removed, broadcasting: 1\nI0126 14:20:01.400032    2191 log.go:172] (0xc00012adc0) (0xc0009ce000) Stream removed, broadcasting: 5\nI0126 14:20:01.400278    2191 log.go:172] 
(0xc00012adc0) Go away received\nI0126 14:20:01.400544    2191 log.go:172] (0xc00012adc0) (0xc000736aa0) Stream removed, broadcasting: 1\nI0126 14:20:01.400591    2191 log.go:172] (0xc00012adc0) (0xc00082a000) Stream removed, broadcasting: 3\nI0126 14:20:01.400840    2191 log.go:172] (0xc00012adc0) (0xc0009ce000) Stream removed, broadcasting: 5\n"
Jan 26 14:20:01.410: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 14:20:01.410: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 14:20:01.415: INFO: Found 1 stateful pods, waiting for 3
Jan 26 14:20:11.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:20:11.430: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:20:11.430: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 26 14:20:21.426: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:20:21.426: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:20:21.426: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 26 14:20:21.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:20:21.961: INFO: stderr: "I0126 14:20:21.632777    2212 log.go:172] (0xc0008bc0b0) (0xc000712640) Create stream\nI0126 14:20:21.632856    2212 log.go:172] (0xc0008bc0b0) (0xc000712640) Stream added, broadcasting: 1\nI0126 14:20:21.638470    2212 log.go:172] (0xc0008bc0b0) Reply frame received for 1\nI0126 14:20:21.638600    2212 log.go:172] (0xc0008bc0b0) (0xc000904000) Create stream\nI0126 14:20:21.638613    2212 log.go:172] (0xc0008bc0b0) (0xc000904000) Stream added, broadcasting: 3\nI0126 14:20:21.639696    2212 log.go:172] (0xc0008bc0b0) Reply frame received for 3\nI0126 14:20:21.639718    2212 log.go:172] (0xc0008bc0b0) (0xc0007126e0) Create stream\nI0126 14:20:21.639725    2212 log.go:172] (0xc0008bc0b0) (0xc0007126e0) Stream added, broadcasting: 5\nI0126 14:20:21.640956    2212 log.go:172] (0xc0008bc0b0) Reply frame received for 5\nI0126 14:20:21.737524    2212 log.go:172] (0xc0008bc0b0) Data frame received for 5\nI0126 14:20:21.737676    2212 log.go:172] (0xc0007126e0) (5) Data frame handling\nI0126 14:20:21.737688    2212 log.go:172] (0xc0007126e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:20:21.737731    2212 log.go:172] (0xc0008bc0b0) Data frame received for 3\nI0126 14:20:21.737755    2212 log.go:172] (0xc000904000) (3) Data frame handling\nI0126 14:20:21.737779    2212 log.go:172] (0xc000904000) (3) Data frame sent\nI0126 14:20:21.948781    2212 log.go:172] (0xc0008bc0b0) (0xc000904000) Stream removed, broadcasting: 3\nI0126 14:20:21.949058    2212 log.go:172] (0xc0008bc0b0) Data frame received for 1\nI0126 14:20:21.949074    2212 log.go:172] (0xc000712640) (1) Data frame handling\nI0126 14:20:21.949086    2212 log.go:172] (0xc000712640) (1) Data frame sent\nI0126 14:20:21.949412    2212 log.go:172] (0xc0008bc0b0) (0xc0007126e0) Stream removed, broadcasting: 5\nI0126 14:20:21.949514    2212 log.go:172] (0xc0008bc0b0) (0xc000712640) Stream removed, broadcasting: 1\nI0126 14:20:21.949542    2212 log.go:172] 
(0xc0008bc0b0) Go away received\nI0126 14:20:21.951737    2212 log.go:172] (0xc0008bc0b0) (0xc000712640) Stream removed, broadcasting: 1\nI0126 14:20:21.952049    2212 log.go:172] (0xc0008bc0b0) (0xc000904000) Stream removed, broadcasting: 3\nI0126 14:20:21.952094    2212 log.go:172] (0xc0008bc0b0) (0xc0007126e0) Stream removed, broadcasting: 5\n"
Jan 26 14:20:21.962: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:20:21.962: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:20:21.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:20:22.752: INFO: stderr: "I0126 14:20:22.326156    2229 log.go:172] (0xc000726370) (0xc0004468c0) Create stream\nI0126 14:20:22.326597    2229 log.go:172] (0xc000726370) (0xc0004468c0) Stream added, broadcasting: 1\nI0126 14:20:22.332089    2229 log.go:172] (0xc000726370) Reply frame received for 1\nI0126 14:20:22.332196    2229 log.go:172] (0xc000726370) (0xc000664000) Create stream\nI0126 14:20:22.332233    2229 log.go:172] (0xc000726370) (0xc000664000) Stream added, broadcasting: 3\nI0126 14:20:22.333687    2229 log.go:172] (0xc000726370) Reply frame received for 3\nI0126 14:20:22.333743    2229 log.go:172] (0xc000726370) (0xc00062c000) Create stream\nI0126 14:20:22.333774    2229 log.go:172] (0xc000726370) (0xc00062c000) Stream added, broadcasting: 5\nI0126 14:20:22.334877    2229 log.go:172] (0xc000726370) Reply frame received for 5\nI0126 14:20:22.505191    2229 log.go:172] (0xc000726370) Data frame received for 5\nI0126 14:20:22.505267    2229 log.go:172] (0xc00062c000) (5) Data frame handling\nI0126 14:20:22.505290    2229 log.go:172] (0xc00062c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:20:22.585653    2229 log.go:172] (0xc000726370) Data frame received for 3\nI0126 14:20:22.585805    2229 log.go:172] (0xc000664000) (3) Data frame handling\nI0126 14:20:22.585844    2229 log.go:172] (0xc000664000) (3) Data frame sent\nI0126 14:20:22.735577    2229 log.go:172] (0xc000726370) (0xc000664000) Stream removed, broadcasting: 3\nI0126 14:20:22.735768    2229 log.go:172] (0xc000726370) Data frame received for 1\nI0126 14:20:22.735788    2229 log.go:172] (0xc0004468c0) (1) Data frame handling\nI0126 14:20:22.735812    2229 log.go:172] (0xc0004468c0) (1) Data frame sent\nI0126 14:20:22.735895    2229 log.go:172] (0xc000726370) (0xc0004468c0) Stream removed, broadcasting: 1\nI0126 14:20:22.736769    2229 log.go:172] (0xc000726370) (0xc00062c000) Stream removed, broadcasting: 5\nI0126 14:20:22.736842    2229 log.go:172] 
(0xc000726370) (0xc0004468c0) Stream removed, broadcasting: 1\nI0126 14:20:22.736854    2229 log.go:172] (0xc000726370) (0xc000664000) Stream removed, broadcasting: 3\nI0126 14:20:22.736862    2229 log.go:172] (0xc000726370) (0xc00062c000) Stream removed, broadcasting: 5\nI0126 14:20:22.737131    2229 log.go:172] (0xc000726370) Go away received\n"
Jan 26 14:20:22.753: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:20:22.753: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:20:22.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:20:23.297: INFO: stderr: "I0126 14:20:22.936727    2247 log.go:172] (0xc0009d2420) (0xc0007fc820) Create stream\nI0126 14:20:22.936833    2247 log.go:172] (0xc0009d2420) (0xc0007fc820) Stream added, broadcasting: 1\nI0126 14:20:22.944869    2247 log.go:172] (0xc0009d2420) Reply frame received for 1\nI0126 14:20:22.944919    2247 log.go:172] (0xc0009d2420) (0xc000298320) Create stream\nI0126 14:20:22.944930    2247 log.go:172] (0xc0009d2420) (0xc000298320) Stream added, broadcasting: 3\nI0126 14:20:22.946956    2247 log.go:172] (0xc0009d2420) Reply frame received for 3\nI0126 14:20:22.946999    2247 log.go:172] (0xc0009d2420) (0xc0009d6000) Create stream\nI0126 14:20:22.947049    2247 log.go:172] (0xc0009d2420) (0xc0009d6000) Stream added, broadcasting: 5\nI0126 14:20:22.949170    2247 log.go:172] (0xc0009d2420) Reply frame received for 5\nI0126 14:20:23.073643    2247 log.go:172] (0xc0009d2420) Data frame received for 5\nI0126 14:20:23.073735    2247 log.go:172] (0xc0009d6000) (5) Data frame handling\nI0126 14:20:23.073755    2247 log.go:172] (0xc0009d6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:20:23.123109    2247 log.go:172] (0xc0009d2420) Data frame received for 3\nI0126 14:20:23.123201    2247 log.go:172] (0xc000298320) (3) Data frame handling\nI0126 14:20:23.123240    2247 log.go:172] (0xc000298320) (3) Data frame sent\nI0126 14:20:23.289700    2247 log.go:172] (0xc0009d2420) Data frame received for 1\nI0126 14:20:23.289736    2247 log.go:172] (0xc0009d2420) (0xc000298320) Stream removed, broadcasting: 3\nI0126 14:20:23.289785    2247 log.go:172] (0xc0007fc820) (1) Data frame handling\nI0126 14:20:23.289805    2247 log.go:172] (0xc0007fc820) (1) Data frame sent\nI0126 14:20:23.289940    2247 log.go:172] (0xc0009d2420) (0xc0009d6000) Stream removed, broadcasting: 5\nI0126 14:20:23.289999    2247 log.go:172] (0xc0009d2420) (0xc0007fc820) Stream removed, broadcasting: 1\nI0126 14:20:23.290026    2247 log.go:172] 
(0xc0009d2420) Go away received\nI0126 14:20:23.291021    2247 log.go:172] (0xc0009d2420) (0xc0007fc820) Stream removed, broadcasting: 1\nI0126 14:20:23.291048    2247 log.go:172] (0xc0009d2420) (0xc000298320) Stream removed, broadcasting: 3\nI0126 14:20:23.291062    2247 log.go:172] (0xc0009d2420) (0xc0009d6000) Stream removed, broadcasting: 5\n"
Jan 26 14:20:23.297: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:20:23.297: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:20:23.297: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:20:23.302: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 26 14:20:33.327: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:20:33.327: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:20:33.327: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:20:33.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999591s
Jan 26 14:20:34.355: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989182567s
Jan 26 14:20:35.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980899071s
Jan 26 14:20:36.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.968493409s
Jan 26 14:20:37.401: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.94385397s
Jan 26 14:20:38.420: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.935174103s
Jan 26 14:20:39.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.91614955s
Jan 26 14:20:40.461: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.893042898s
Jan 26 14:20:41.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.875714378s
Jan 26 14:20:42.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 865.778991ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6726
Jan 26 14:20:43.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:20:46.786: INFO: stderr: "I0126 14:20:46.334861    2267 log.go:172] (0xc0008ee210) (0xc0005c4820) Create stream\nI0126 14:20:46.334973    2267 log.go:172] (0xc0008ee210) (0xc0005c4820) Stream added, broadcasting: 1\nI0126 14:20:46.343885    2267 log.go:172] (0xc0008ee210) Reply frame received for 1\nI0126 14:20:46.343930    2267 log.go:172] (0xc0008ee210) (0xc00029fa40) Create stream\nI0126 14:20:46.343941    2267 log.go:172] (0xc0008ee210) (0xc00029fa40) Stream added, broadcasting: 3\nI0126 14:20:46.345892    2267 log.go:172] (0xc0008ee210) Reply frame received for 3\nI0126 14:20:46.345928    2267 log.go:172] (0xc0008ee210) (0xc0007120a0) Create stream\nI0126 14:20:46.345942    2267 log.go:172] (0xc0008ee210) (0xc0007120a0) Stream added, broadcasting: 5\nI0126 14:20:46.347783    2267 log.go:172] (0xc0008ee210) Reply frame received for 5\nI0126 14:20:46.517740    2267 log.go:172] (0xc0008ee210) Data frame received for 5\nI0126 14:20:46.517892    2267 log.go:172] (0xc0007120a0) (5) Data frame handling\nI0126 14:20:46.517937    2267 log.go:172] (0xc0007120a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 14:20:46.518024    2267 log.go:172] (0xc0008ee210) Data frame received for 3\nI0126 14:20:46.518109    2267 log.go:172] (0xc00029fa40) (3) Data frame handling\nI0126 14:20:46.518152    2267 log.go:172] (0xc00029fa40) (3) Data frame sent\nI0126 14:20:46.776212    2267 log.go:172] (0xc0008ee210) (0xc0007120a0) Stream removed, broadcasting: 5\nI0126 14:20:46.776479    2267 log.go:172] (0xc0008ee210) Data frame received for 1\nI0126 14:20:46.776501    2267 log.go:172] (0xc0005c4820) (1) Data frame handling\nI0126 14:20:46.776532    2267 log.go:172] (0xc0008ee210) (0xc00029fa40) Stream removed, broadcasting: 3\nI0126 14:20:46.776566    2267 log.go:172] (0xc0005c4820) (1) Data frame sent\nI0126 14:20:46.776579    2267 log.go:172] (0xc0008ee210) (0xc0005c4820) Stream removed, broadcasting: 1\nI0126 14:20:46.776595    2267 log.go:172] 
(0xc0008ee210) Go away received\nI0126 14:20:46.777886    2267 log.go:172] (0xc0008ee210) (0xc0005c4820) Stream removed, broadcasting: 1\nI0126 14:20:46.777905    2267 log.go:172] (0xc0008ee210) (0xc00029fa40) Stream removed, broadcasting: 3\nI0126 14:20:46.777917    2267 log.go:172] (0xc0008ee210) (0xc0007120a0) Stream removed, broadcasting: 5\n"
Jan 26 14:20:46.786: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 14:20:46.786: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 14:20:46.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:20:47.197: INFO: stderr: "I0126 14:20:46.933364    2293 log.go:172] (0xc00078c420) (0xc0006b8780) Create stream\nI0126 14:20:46.933614    2293 log.go:172] (0xc00078c420) (0xc0006b8780) Stream added, broadcasting: 1\nI0126 14:20:46.939029    2293 log.go:172] (0xc00078c420) Reply frame received for 1\nI0126 14:20:46.939196    2293 log.go:172] (0xc00078c420) (0xc000400320) Create stream\nI0126 14:20:46.939267    2293 log.go:172] (0xc00078c420) (0xc000400320) Stream added, broadcasting: 3\nI0126 14:20:46.942268    2293 log.go:172] (0xc00078c420) Reply frame received for 3\nI0126 14:20:46.942328    2293 log.go:172] (0xc00078c420) (0xc0006e8000) Create stream\nI0126 14:20:46.942342    2293 log.go:172] (0xc00078c420) (0xc0006e8000) Stream added, broadcasting: 5\nI0126 14:20:46.944085    2293 log.go:172] (0xc00078c420) Reply frame received for 5\nI0126 14:20:47.021829    2293 log.go:172] (0xc00078c420) Data frame received for 5\nI0126 14:20:47.021884    2293 log.go:172] (0xc0006e8000) (5) Data frame handling\nI0126 14:20:47.021901    2293 log.go:172] (0xc0006e8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 14:20:47.021927    2293 log.go:172] (0xc00078c420) Data frame received for 3\nI0126 14:20:47.021956    2293 log.go:172] (0xc000400320) (3) Data frame handling\nI0126 14:20:47.021973    2293 log.go:172] (0xc000400320) (3) Data frame sent\nI0126 14:20:47.173128    2293 log.go:172] (0xc00078c420) Data frame received for 1\nI0126 14:20:47.173279    2293 log.go:172] (0xc00078c420) (0xc0006e8000) Stream removed, broadcasting: 5\nI0126 14:20:47.173371    2293 log.go:172] (0xc0006b8780) (1) Data frame handling\nI0126 14:20:47.173434    2293 log.go:172] (0xc0006b8780) (1) Data frame sent\nI0126 14:20:47.173835    2293 log.go:172] (0xc00078c420) (0xc000400320) Stream removed, broadcasting: 3\nI0126 14:20:47.174602    2293 log.go:172] (0xc00078c420) (0xc0006b8780) Stream removed, broadcasting: 1\nI0126 14:20:47.174731    2293 log.go:172] 
(0xc00078c420) Go away received\nI0126 14:20:47.176220    2293 log.go:172] (0xc00078c420) (0xc0006b8780) Stream removed, broadcasting: 1\nI0126 14:20:47.176234    2293 log.go:172] (0xc00078c420) (0xc000400320) Stream removed, broadcasting: 3\nI0126 14:20:47.176241    2293 log.go:172] (0xc00078c420) (0xc0006e8000) Stream removed, broadcasting: 5\n"
Jan 26 14:20:47.197: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 14:20:47.197: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 14:20:47.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:20:47.572: INFO: rc: 126
Jan 26 14:20:47.572: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
 I0126 14:20:47.526347    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Create stream
I0126 14:20:47.526524    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Stream added, broadcasting: 1
I0126 14:20:47.533303    2312 log.go:172] (0xc0008d4370) Reply frame received for 1
I0126 14:20:47.533325    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Create stream
I0126 14:20:47.533332    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Stream added, broadcasting: 3
I0126 14:20:47.534330    2312 log.go:172] (0xc0008d4370) Reply frame received for 3
I0126 14:20:47.534350    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Create stream
I0126 14:20:47.534357    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Stream added, broadcasting: 5
I0126 14:20:47.535410    2312 log.go:172] (0xc0008d4370) Reply frame received for 5
I0126 14:20:47.557521    2312 log.go:172] (0xc0008d4370) Data frame received for 3
I0126 14:20:47.557534    2312 log.go:172] (0xc0008f4820) (3) Data frame handling
I0126 14:20:47.557546    2312 log.go:172] (0xc0008f4820) (3) Data frame sent
I0126 14:20:47.562222    2312 log.go:172] (0xc0008d4370) Data frame received for 1
I0126 14:20:47.562280    2312 log.go:172] (0xc0008f4780) (1) Data frame handling
I0126 14:20:47.562293    2312 log.go:172] (0xc0008f4780) (1) Data frame sent
I0126 14:20:47.562310    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Stream removed, broadcasting: 1
I0126 14:20:47.563070    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Stream removed, broadcasting: 3
I0126 14:20:47.563228    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Stream removed, broadcasting: 5
I0126 14:20:47.563281    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Stream removed, broadcasting: 1
I0126 14:20:47.563291    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Stream removed, broadcasting: 3
I0126 14:20:47.563297    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Stream removed, broadcasting: 5
I0126 14:20:47.563438    2312 log.go:172] (0xc0008d4370) Go away received
command terminated with exit code 126
 []  0xc0028a2240 exit status 126   true [0xc000d581d8 0xc000d58538 0xc000d58740] [0xc000d581d8 0xc000d58538 0xc000d58740] [0xc000d583b8 0xc000d58708] [0xba6c50 0xba6c50] 0xc002776540 }:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0126 14:20:47.526347    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Create stream
I0126 14:20:47.526524    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Stream added, broadcasting: 1
I0126 14:20:47.533303    2312 log.go:172] (0xc0008d4370) Reply frame received for 1
I0126 14:20:47.533325    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Create stream
I0126 14:20:47.533332    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Stream added, broadcasting: 3
I0126 14:20:47.534330    2312 log.go:172] (0xc0008d4370) Reply frame received for 3
I0126 14:20:47.534350    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Create stream
I0126 14:20:47.534357    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Stream added, broadcasting: 5
I0126 14:20:47.535410    2312 log.go:172] (0xc0008d4370) Reply frame received for 5
I0126 14:20:47.557521    2312 log.go:172] (0xc0008d4370) Data frame received for 3
I0126 14:20:47.557534    2312 log.go:172] (0xc0008f4820) (3) Data frame handling
I0126 14:20:47.557546    2312 log.go:172] (0xc0008f4820) (3) Data frame sent
I0126 14:20:47.562222    2312 log.go:172] (0xc0008d4370) Data frame received for 1
I0126 14:20:47.562280    2312 log.go:172] (0xc0008f4780) (1) Data frame handling
I0126 14:20:47.562293    2312 log.go:172] (0xc0008f4780) (1) Data frame sent
I0126 14:20:47.562310    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Stream removed, broadcasting: 1
I0126 14:20:47.563070    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Stream removed, broadcasting: 3
I0126 14:20:47.563228    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Stream removed, broadcasting: 5
I0126 14:20:47.563281    2312 log.go:172] (0xc0008d4370) (0xc0008f4780) Stream removed, broadcasting: 1
I0126 14:20:47.563291    2312 log.go:172] (0xc0008d4370) (0xc0008f4820) Stream removed, broadcasting: 3
I0126 14:20:47.563297    2312 log.go:172] (0xc0008d4370) (0xc0005d4320) Stream removed, broadcasting: 5
I0126 14:20:47.563438    2312 log.go:172] (0xc0008d4370) Go away received
command terminated with exit code 126

error:
exit status 126
Jan 26 14:20:57.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:20:57.920: INFO: rc: 1
Jan 26 14:20:57.921: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0011b0ba0 exit status 1   true [0xc000dff290 0xc000dff2c8 0xc000dff318] [0xc000dff290 0xc000dff2c8 0xc000dff318] [0xc000dff2b8 0xc000dff2f8] [0xba6c50 0xba6c50] 0xc002edf500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:21:07.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:21:08.114: INFO: rc: 1
Jan 26 14:21:08.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0011b0cc0 exit status 1   true [0xc000dff330 0xc000dff438 0xc000dff528] [0xc000dff330 0xc000dff438 0xc000dff528] [0xc000dff3f0 0xc000dff4d0] [0xba6c50 0xba6c50] 0xc002edf800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:21:18.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:21:18.267: INFO: rc: 1
Jan 26 14:21:18.267: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020f6a20 exit status 1   true [0xc0000f1460 0xc0000f1698 0xc0000f1980] [0xc0000f1460 0xc0000f1698 0xc0000f1980] [0xc0000f1610 0xc0000f18c0] [0xba6c50 0xba6c50] 0xc002829260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:21:28.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:21:28.475: INFO: rc: 1
Jan 26 14:21:28.476: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0011b0db0 exit status 1   true [0xc000dff568 0xc000dff610 0xc000dff6a0] [0xc000dff568 0xc000dff610 0xc000dff6a0] [0xc000dff5e8 0xc000dff688] [0xba6c50 0xba6c50] 0xc002edfb00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:21:38.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:21:38.632: INFO: rc: 1
Jan 26 14:21:38.633: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020f6ae0 exit status 1   true [0xc0000f19c8 0xc0000f1b98 0xc0000f1d70] [0xc0000f19c8 0xc0000f1b98 0xc0000f1d70] [0xc0000f1ad0 0xc0000f1d10] [0xba6c50 0xba6c50] 0xc002829800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:21:48.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:21:48.818: INFO: rc: 1
Jan 26 14:21:48.818: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020f6bd0 exit status 1   true [0xc0000f1d80 0xc0000f1e60 0xc0000f1f60] [0xc0000f1d80 0xc0000f1e60 0xc0000f1f60] [0xc0000f1dc8 0xc0000f1ef0] [0xba6c50 0xba6c50] 0xc002829ce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:21:58.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:21:58.968: INFO: rc: 1
Jan 26 14:21:58.968: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002ecb920 exit status 1   true [0xc001c3e110 0xc001c3e148 0xc001c3e178] [0xc001c3e110 0xc001c3e148 0xc001c3e178] [0xc001c3e140 0xc001c3e168] [0xba6c50 0xba6c50] 0xc00209efc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:22:08.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:22:09.156: INFO: rc: 1
Jan 26 14:22:09.157: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002ecb9e0 exit status 1   true [0xc001c3e188 0xc001c3e1a0 0xc001c3e200] [0xc001c3e188 0xc001c3e1a0 0xc001c3e200] [0xc001c3e198 0xc001c3e1f8] [0xba6c50 0xba6c50] 0xc00209f4a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:22:19.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:22:19.349: INFO: rc: 1
Jan 26 14:22:19.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0011b0ed0 exit status 1   true [0xc000dff6b0 0xc000dff718 0xc000dff768] [0xc000dff6b0 0xc000dff718 0xc000dff768] [0xc000dff6e8 0xc000dff750] [0xba6c50 0xba6c50] 0xc002edfe00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:22:29.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:22:29.548: INFO: rc: 1
Jan 26 14:22:29.549: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0027ca0c0 exit status 1   true [0xc002636008 0xc002636020 0xc002636038] [0xc002636008 0xc002636020 0xc002636038] [0xc002636018 0xc002636030] [0xba6c50 0xba6c50] 0xc00173c240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:22:39.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:22:39.693: INFO: rc: 1
Jan 26 14:22:39.694: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020540c0 exit status 1   true [0xc0000114e0 0xc0001a7dc8 0xc0000f07e8] [0xc0000114e0 0xc0001a7dc8 0xc0000f07e8] [0xc000011f38 0xc0000f07b0] [0xba6c50 0xba6c50] 0xc002828420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:22:49.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:22:49.895: INFO: rc: 1
Jan 26 14:22:49.895: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002054180 exit status 1   true [0xc0000f08d8 0xc0000f09a0 0xc0000f0ac8] [0xc0000f08d8 0xc0000f09a0 0xc0000f0ac8] [0xc0000f0968 0xc0000f0a30] [0xba6c50 0xba6c50] 0xc0028289c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:22:59.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:23:00.067: INFO: rc: 1
Jan 26 14:23:00.067: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020f60c0 exit status 1   true [0xc000d58028 0xc000d581d8 0xc000d58538] [0xc000d58028 0xc000d581d8 0xc000d58538] [0xc000d58148 0xc000d583b8] [0xba6c50 0xba6c50] 0xc002776240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:23:10.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:23:10.247: INFO: rc: 1
Jan 26 14:23:10.247: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00287a0c0 exit status 1   true [0xc000dfe080 0xc000dfe150 0xc000dfe228] [0xc000dfe080 0xc000dfe150 0xc000dfe228] [0xc000dfe118 0xc000dfe190] [0xba6c50 0xba6c50] 0xc002ede2a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:23:20.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:23:20.446: INFO: rc: 1
Jan 26 14:23:20.447: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002054270 exit status 1   true [0xc0000f0b28 0xc0000f0bf0 0xc0000f0e28] [0xc0000f0b28 0xc0000f0bf0 0xc0000f0e28] [0xc0000f0bc8 0xc0000f0d78] [0xba6c50 0xba6c50] 0xc002828d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:23:30.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:23:30.645: INFO: rc: 1
Jan 26 14:23:30.646: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc00287a1b0 exit status 1   true [0xc000dfe298 0xc000dfe4c0 0xc000dfe550] [0xc000dfe298 0xc000dfe4c0 0xc000dfe550] [0xc000dfe480 0xc000dfe530] [0xba6c50 0xba6c50] 0xc002ede5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:23:40.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:23:40.803: INFO: rc: 1
Jan 26 14:23:40.804: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0028a2150 exit status 1   true [0xc001c3e000 0xc001c3e070 0xc001c3e0c8] [0xc001c3e000 0xc001c3e070 0xc001c3e0c8] [0xc001c3e040 0xc001c3e0c0] [0xba6c50 0xba6c50] 0xc00209e360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:23:50.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:23:50.919: INFO: rc: 1
Jan 26 14:23:50.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020f6180 exit status 1   true [0xc000d58670 0xc000d587d0 0xc000d588a0] [0xc000d58670 0xc000d587d0 0xc000d588a0] [0xc000d58740 0xc000d58898] [0xba6c50 0xba6c50] 0xc002776540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:24:00.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:24:01.093: INFO: rc: 1
Jan 26 14:24:01.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0020f6240 exit status 1   true [0xc000d588c0 0xc000d58998 0xc000d58bf8] [0xc000d588c0 0xc000d58998 0xc000d58bf8] [0xc000d58930 0xc000d58be0] [0xba6c50 0xba6c50] 0xc002776840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan 26 14:24:11 - 14:25:42: INFO: (... the RunHostCmd attempt above repeated 10 more times, once every 10s, each ending in rc: 1 with 'Error from server (NotFound): pods "ss-2" not found' ...)
Jan 26 14:25:52.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:25:53.026: INFO: rc: 1
Jan 26 14:25:53.026: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
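The retry pattern in the log above (re-run the command every 10s until it succeeds or the attempts run out) can be sketched as follows. This is a minimal model, not the framework's code; `run_cmd` is a hypothetical stand-in for the e2e framework's RunHostCmd helper and returns `(rc, stdout, stderr)` like the `rc: 1` lines above:

```python
import time

def retry_host_cmd(run_cmd, retries=5, delay=0.0):
    """Retry a command callable until it returns rc == 0 or retries run out.

    run_cmd is a hypothetical stand-in for the e2e framework's RunHostCmd
    helper; it returns (rc, stdout, stderr) like the log lines above.
    """
    last = None
    for _ in range(retries):
        rc, stdout, stderr = run_cmd()
        last = (rc, stdout, stderr)
        if rc == 0:
            return last
        # Mirrors "Waiting 10s to retry failed RunHostCmd" in the log.
        time.sleep(delay)
    return last

# Simulate a pod that only becomes reachable on the third attempt.
attempts = {"n": 0}
def fake_run():
    attempts["n"] += 1
    if attempts["n"] < 3:
        return (1, "", 'Error from server (NotFound): pods "ss-2" not found')
    return (0, "renamed '/tmp/index.html' -> '/usr/share/nginx/html/index.html'", "")

rc, out, err = retry_host_cmd(fake_run, retries=5, delay=0)
```

Here the loop gives up only after exhausting its attempts, which is why the log keeps emitting the same NotFound error until the surrounding test times out or moves on.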
Jan 26 14:25:53.027: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 26 14:25:53.041: INFO: Deleting all statefulset in ns statefulset-6726
Jan 26 14:25:53.043: INFO: Scaling statefulset ss to 0
Jan 26 14:25:53.061: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:25:53.065: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:25:53.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6726" for this suite.
Jan 26 14:25:59.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:25:59.219: INFO: namespace statefulset-6726 deletion completed in 6.137299125s

• [SLOW TEST:389.381 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
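For context on the "scaled down in reverse order" step in the StatefulSet test above: a StatefulSet names its pods `<name>-0` through `<name>-(N-1)` and terminates them from the highest ordinal down. A minimal sketch of that expectation:

```python
def scale_down_order(name, current_replicas, target_replicas):
    """Pods a StatefulSet deletes when scaling down, highest ordinal first.

    StatefulSet pods are named <name>-0 .. <name>-(N-1) and are removed in
    reverse ordinal order, so scaling ss from 3 to 0 deletes ss-2, ss-1, ss-0.
    """
    return [f"{name}-{i}" for i in range(current_replicas - 1, target_replicas - 1, -1)]

order = scale_down_order("ss", 3, 0)
```

This ordering guarantee is what lets the test assert a predictable sequence (and why it watches `ss-2` specifically while scaling down).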
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:25:59.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 26 14:25:59.286: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 26 14:25:59.294: INFO: Waiting for terminating namespaces to be deleted...
Jan 26 14:25:59.296: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 26 14:25:59.311: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.311: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 26 14:25:59.311: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 26 14:25:59.311: INFO: 	Container weave ready: true, restart count 0
Jan 26 14:25:59.311: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 14:25:59.311: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 26 14:25:59.330: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 26 14:25:59.330: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 26 14:25:59.330: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container coredns ready: true, restart count 0
Jan 26 14:25:59.330: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container etcd ready: true, restart count 0
Jan 26 14:25:59.330: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container weave ready: true, restart count 0
Jan 26 14:25:59.330: INFO: 	Container weave-npc ready: true, restart count 0
Jan 26 14:25:59.330: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container coredns ready: true, restart count 0
Jan 26 14:25:59.330: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 26 14:25:59.330: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 26 14:25:59.330: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ed75f9a4dc938a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
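The FailedScheduling event above comes from the NodeSelector predicate: every key/value pair in the pod's `nodeSelector` must be present in a node's labels for that node to be feasible. A simplified model of that check (the real scheduler evaluates many more predicates; the label key used here is hypothetical, not the one the test generated):

```python
def node_selector_matches(node_labels, node_selector):
    """True if every key/value in the pod's nodeSelector is present on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical label sets for the two nodes in this cluster.
nodes = {
    "iruya-node": {"kubernetes.io/hostname": "iruya-node"},
    "iruya-server-sfge57q7djm7": {"kubernetes.io/hostname": "iruya-server-sfge57q7djm7"},
}
# The test schedules a pod whose selector matches no node (hypothetical key).
selector = {"k8s.io/e2e-label": "42"}

feasible = [n for n, labels in nodes.items() if node_selector_matches(labels, selector)]
unavailable = len(nodes) - len(feasible)
```

With no feasible node, `unavailable` is 2, matching the event message "0/2 nodes are available: 2 node(s) didn't match node selector."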
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:26:00.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2359" for this suite.
Jan 26 14:26:06.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:26:06.542: INFO: namespace sched-pred-2359 deletion completed in 6.168279072s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.323 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:26:06.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-q2dtk in namespace proxy-7459
I0126 14:26:06.826797       8 runners.go:180] Created replication controller with name: proxy-service-q2dtk, namespace: proxy-7459, replica count: 1
I0126 14:26:07.878146       8 runners.go:180] proxy-service-q2dtk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
(... the runners.go:180 poll line above repeated once per second: 1 pending through 14:26:12, then 1 runningButNotReady from 14:26:13 through 14:26:19 ...)
I0126 14:26:20.885591       8 runners.go:180] proxy-service-q2dtk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 26 14:26:20.895: INFO: setup took 14.236120559s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
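Each attempt below hits an apiserver proxy path of the form `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port>]/proxy/` (320 total attempts = 16 cases x 20 attempts). A small helper, written here for illustration, that builds these paths:

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy path like those in the log below.

    kind is "pods" or "services"; scheme ("http"/"https") and port (a number
    or a named port) are optional, matching forms such as
    /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

p = proxy_path("proxy-7459", "pods", "proxy-service-q2dtk-kmvmq", 160, "http")
```

The scheme prefix selects how the apiserver dials the backend, and a named port (e.g. `portname1`) resolves through the service's port list rather than a literal number.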
Jan 26 14:26:20.933: INFO: (0) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 38.258339ms)
Jan 26 14:26:20.933: INFO: (0) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 38.22984ms)
Jan 26 14:26:20.933: INFO: (0) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 38.229389ms)
Jan 26 14:26:20.933: INFO: (0) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 38.266336ms)
Jan 26 14:26:20.936: INFO: (0) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 40.81037ms)
Jan 26 14:26:20.938: INFO: (0) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 42.814819ms)
Jan 26 14:26:20.938: INFO: (0) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 42.968007ms)
Jan 26 14:26:20.938: INFO: (0) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 43.218595ms)
Jan 26 14:26:20.938: INFO: (0) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 43.18008ms)
Jan 26 14:26:20.939: INFO: (0) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 44.318417ms)
Jan 26 14:26:20.940: INFO: (0) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 45.361729ms)
Jan 26 14:26:20.956: INFO: (0) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 60.931358ms)
Jan 26 14:26:20.956: INFO: (0) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 61.147748ms)
Jan 26 14:26:20.956: INFO: (0) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 61.149229ms)
Jan 26 14:26:20.962: INFO: (0) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 67.409922ms)
Jan 26 14:26:20.963: INFO: (0) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test<... (200; 20.628937ms)
Jan 26 14:26:20.987: INFO: (1) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 21.210668ms)
Jan 26 14:26:20.987: INFO: (1) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 22.469529ms)
Jan 26 14:26:20.991: INFO: (1) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 27.355956ms)
Jan 26 14:26:20.991: INFO: (1) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 26.67946ms)
Jan 26 14:26:20.992: INFO: (1) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 26.039677ms)
Jan 26 14:26:20.992: INFO: (1) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 26.808648ms)
Jan 26 14:26:20.992: INFO: (1) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 25.97057ms)
Jan 26 14:26:20.995: INFO: (1) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 30.598999ms)
Jan 26 14:26:20.995: INFO: (1) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 30.137031ms)
Jan 26 14:26:20.996: INFO: (1) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 33.21609ms)
Jan 26 14:26:20.997: INFO: (1) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 33.352069ms)
Jan 26 14:26:20.997: INFO: (1) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 31.398933ms)
Jan 26 14:26:20.997: INFO: (1) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 31.523347ms)
Jan 26 14:26:20.998: INFO: (1) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 18.334599ms)
Jan 26 14:26:21.018: INFO: (2) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 18.55462ms)
Jan 26 14:26:21.019: INFO: (2) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 19.062996ms)
Jan 26 14:26:21.019: INFO: (2) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 19.295369ms)
Jan 26 14:26:21.022: INFO: (2) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 22.188305ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 22.839292ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 22.974863ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 23.01712ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 23.255419ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 23.56469ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 23.354968ms)
Jan 26 14:26:21.023: INFO: (2) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test<... (200; 18.214258ms)
Jan 26 14:26:21.043: INFO: (3) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 18.462262ms)
Jan 26 14:26:21.043: INFO: (3) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 18.072794ms)
Jan 26 14:26:21.043: INFO: (3) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 19.065935ms)
Jan 26 14:26:21.044: INFO: (3) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 18.652953ms)
Jan 26 14:26:21.045: INFO: (3) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 20.257548ms)
Jan 26 14:26:21.045: INFO: (3) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 20.262803ms)
Jan 26 14:26:21.045: INFO: (3) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 20.542449ms)
Jan 26 14:26:21.045: INFO: (3) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 20.6469ms)
Jan 26 14:26:21.059: INFO: (4) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 13.11255ms)
Jan 26 14:26:21.059: INFO: (4) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 13.086308ms)
Jan 26 14:26:21.059: INFO: (4) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 13.639511ms)
Jan 26 14:26:21.059: INFO: (4) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 13.58666ms)
Jan 26 14:26:21.059: INFO: (4) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test (200; 14.481514ms)
Jan 26 14:26:21.060: INFO: (4) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 14.513351ms)
Jan 26 14:26:21.063: INFO: (4) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 17.3109ms)
Jan 26 14:26:21.063: INFO: (4) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 17.29879ms)
Jan 26 14:26:21.064: INFO: (4) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 18.543564ms)
Jan 26 14:26:21.065: INFO: (4) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 19.296065ms)
Jan 26 14:26:21.065: INFO: (4) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 19.285646ms)
Jan 26 14:26:21.065: INFO: (4) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 19.777214ms)
Jan 26 14:26:21.074: INFO: (5) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 8.678066ms)
Jan 26 14:26:21.074: INFO: (5) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 8.875469ms)
Jan 26 14:26:21.075: INFO: (5) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 9.298049ms)
Jan 26 14:26:21.075: INFO: (5) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 9.759162ms)
Jan 26 14:26:21.075: INFO: (5) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 9.915612ms)
Jan 26 14:26:21.076: INFO: (5) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 10.181674ms)
Jan 26 14:26:21.076: INFO: (5) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 7.760875ms)
Jan 26 14:26:21.090: INFO: (6) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test<... (200; 10.006674ms)
Jan 26 14:26:21.092: INFO: (6) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 10.164341ms)
Jan 26 14:26:21.092: INFO: (6) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 10.461933ms)
Jan 26 14:26:21.092: INFO: (6) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 10.551864ms)
Jan 26 14:26:21.094: INFO: (6) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 11.811411ms)
Jan 26 14:26:21.094: INFO: (6) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 12.202572ms)
Jan 26 14:26:21.096: INFO: (6) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 13.78945ms)
Jan 26 14:26:21.097: INFO: (6) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 14.81472ms)
Jan 26 14:26:21.097: INFO: (6) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 14.80682ms)
Jan 26 14:26:21.097: INFO: (6) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 15.035781ms)
Jan 26 14:26:21.097: INFO: (6) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 15.331828ms)
Jan 26 14:26:21.105: INFO: (7) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 7.922168ms)
Jan 26 14:26:21.105: INFO: (7) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 7.959985ms)
Jan 26 14:26:21.106: INFO: (7) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 8.502526ms)
Jan 26 14:26:21.106: INFO: (7) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 8.43198ms)
Jan 26 14:26:21.106: INFO: (7) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test (200; 9.25919ms)
Jan 26 14:26:21.107: INFO: (7) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 9.747756ms)
Jan 26 14:26:21.107: INFO: (7) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 10.09165ms)
Jan 26 14:26:21.108: INFO: (7) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 10.851714ms)
Jan 26 14:26:21.108: INFO: (7) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 11.197698ms)
Jan 26 14:26:21.110: INFO: (7) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 12.330967ms)
Jan 26 14:26:21.111: INFO: (7) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 13.308303ms)
Jan 26 14:26:21.111: INFO: (7) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 13.526995ms)
Jan 26 14:26:21.114: INFO: (8) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 2.860108ms)
Jan 26 14:26:21.114: INFO: (8) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 3.028867ms)
Jan 26 14:26:21.119: INFO: (8) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 8.462494ms)
Jan 26 14:26:21.119: INFO: (8) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 8.481705ms)
Jan 26 14:26:21.119: INFO: (8) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 8.460299ms)
Jan 26 14:26:21.120: INFO: (8) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 9.082979ms)
Jan 26 14:26:21.120: INFO: (8) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 9.248435ms)
Jan 26 14:26:21.120: INFO: (8) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 9.452264ms)
Jan 26 14:26:21.120: INFO: (8) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 9.585406ms)
Jan 26 14:26:21.121: INFO: (8) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 9.61959ms)
Jan 26 14:26:21.121: INFO: (8) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 9.682786ms)
Jan 26 14:26:21.121: INFO: (8) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 9.613683ms)
Jan 26 14:26:21.121: INFO: (8) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 10.070485ms)
Jan 26 14:26:21.121: INFO: (8) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 10.11081ms)
Jan 26 14:26:21.121: INFO: (8) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 10.151652ms)
Jan 26 14:26:21.126: INFO: (9) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 5.065976ms)
Jan 26 14:26:21.127: INFO: (9) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 5.546506ms)
Jan 26 14:26:21.127: INFO: (9) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 5.762394ms)
Jan 26 14:26:21.127: INFO: (9) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 5.790862ms)
Jan 26 14:26:21.129: INFO: (9) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test<... (200; 11.640171ms)
Jan 26 14:26:21.133: INFO: (9) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 12.037911ms)
Jan 26 14:26:21.133: INFO: (9) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 11.935421ms)
Jan 26 14:26:21.133: INFO: (9) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 12.015773ms)
Jan 26 14:26:21.133: INFO: (9) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 12.272506ms)
Jan 26 14:26:21.136: INFO: (9) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 14.271775ms)
Jan 26 14:26:21.136: INFO: (9) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 14.420861ms)
Jan 26 14:26:21.136: INFO: (9) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 14.724381ms)
Jan 26 14:26:21.136: INFO: (9) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 15.069601ms)
Jan 26 14:26:21.136: INFO: (9) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 15.113563ms)
Jan 26 14:26:21.142: INFO: (10) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 5.50738ms)
Jan 26 14:26:21.142: INFO: (10) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 5.354528ms)
Jan 26 14:26:21.142: INFO: (10) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 5.654232ms)
Jan 26 14:26:21.142: INFO: (10) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 5.715311ms)
Jan 26 14:26:21.143: INFO: (10) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 6.371097ms)
Jan 26 14:26:21.143: INFO: (10) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 6.753114ms)
Jan 26 14:26:21.144: INFO: (10) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 7.664163ms)
Jan 26 14:26:21.145: INFO: (10) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 7.935704ms)
Jan 26 14:26:21.145: INFO: (10) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 8.463368ms)
Jan 26 14:26:21.146: INFO: (10) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 9.659967ms)
Jan 26 14:26:21.147: INFO: (10) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 10.490903ms)
Jan 26 14:26:21.147: INFO: (10) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 10.810271ms)
Jan 26 14:26:21.147: INFO: (10) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 10.839694ms)
Jan 26 14:26:21.148: INFO: (10) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 11.135324ms)
Jan 26 14:26:21.149: INFO: (10) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 12.512552ms)
Jan 26 14:26:21.150: INFO: (10) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test (200; 10.842967ms)
Jan 26 14:26:21.164: INFO: (11) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 14.027407ms)
Jan 26 14:26:21.165: INFO: (11) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 14.865365ms)
Jan 26 14:26:21.166: INFO: (11) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 16.006445ms)
Jan 26 14:26:21.166: INFO: (11) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 16.069222ms)
Jan 26 14:26:21.166: INFO: (11) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 16.102184ms)
Jan 26 14:26:21.166: INFO: (11) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 16.331329ms)
Jan 26 14:26:21.167: INFO: (11) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 16.498583ms)
Jan 26 14:26:21.175: INFO: (12) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 8.391406ms)
Jan 26 14:26:21.175: INFO: (12) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 8.394661ms)
Jan 26 14:26:21.176: INFO: (12) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 8.850926ms)
Jan 26 14:26:21.176: INFO: (12) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 9.104474ms)
Jan 26 14:26:21.176: INFO: (12) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 9.244709ms)
Jan 26 14:26:21.178: INFO: (12) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 11.183563ms)
Jan 26 14:26:21.178: INFO: (12) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 11.189712ms)
Jan 26 14:26:21.178: INFO: (12) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 12.107234ms)
Jan 26 14:26:21.179: INFO: (12) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 12.367882ms)
Jan 26 14:26:21.180: INFO: (12) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 13.095829ms)
Jan 26 14:26:21.180: INFO: (12) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 13.179839ms)
Jan 26 14:26:21.180: INFO: (12) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 13.430939ms)
Jan 26 14:26:21.181: INFO: (12) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 14.375094ms)
Jan 26 14:26:21.182: INFO: (12) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 15.525419ms)
Jan 26 14:26:21.193: INFO: (13) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: ... (200; 10.524105ms)
Jan 26 14:26:21.194: INFO: (13) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 10.705534ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 13.061505ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 12.936392ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 12.977442ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 13.326671ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 13.055278ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 12.794327ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 13.014688ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 13.189516ms)
Jan 26 14:26:21.196: INFO: (13) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 12.864024ms)
Jan 26 14:26:21.197: INFO: (13) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 13.797291ms)
Jan 26 14:26:21.197: INFO: (13) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 14.000548ms)
Jan 26 14:26:21.197: INFO: (13) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 13.76008ms)
Jan 26 14:26:21.201: INFO: (14) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 3.506102ms)
Jan 26 14:26:21.201: INFO: (14) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 3.511151ms)
Jan 26 14:26:21.201: INFO: (14) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 3.814437ms)
Jan 26 14:26:21.203: INFO: (14) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 6.174812ms)
Jan 26 14:26:21.204: INFO: (14) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test (200; 6.769838ms)
Jan 26 14:26:21.204: INFO: (14) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 6.944585ms)
Jan 26 14:26:21.206: INFO: (14) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 8.783881ms)
Jan 26 14:26:21.206: INFO: (14) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 9.189477ms)
Jan 26 14:26:21.206: INFO: (14) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 9.347962ms)
Jan 26 14:26:21.206: INFO: (14) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 9.329326ms)
Jan 26 14:26:21.209: INFO: (14) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 11.434409ms)
Jan 26 14:26:21.209: INFO: (14) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 11.601742ms)
Jan 26 14:26:21.209: INFO: (14) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 11.709323ms)
Jan 26 14:26:21.209: INFO: (14) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 12.076431ms)
Jan 26 14:26:21.220: INFO: (15) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 10.438928ms)
Jan 26 14:26:21.220: INFO: (15) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 10.280252ms)
Jan 26 14:26:21.220: INFO: (15) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 10.992152ms)
Jan 26 14:26:21.221: INFO: (15) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 11.49957ms)
Jan 26 14:26:21.221: INFO: (15) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 11.972726ms)
Jan 26 14:26:21.221: INFO: (15) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 12.090806ms)
Jan 26 14:26:21.221: INFO: (15) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 12.202763ms)
Jan 26 14:26:21.221: INFO: (15) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 12.275148ms)
Jan 26 14:26:21.222: INFO: (15) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 12.297296ms)
Jan 26 14:26:21.222: INFO: (15) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 12.140657ms)
Jan 26 14:26:21.222: INFO: (15) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test<... (200; 11.257407ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 11.514569ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 11.529095ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 12.295992ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 12.30995ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 12.101455ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 12.3549ms)
Jan 26 14:26:21.234: INFO: (16) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 12.167384ms)
Jan 26 14:26:21.235: INFO: (16) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 12.169024ms)
Jan 26 14:26:21.242: INFO: (17) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 7.808291ms)
Jan 26 14:26:21.242: INFO: (17) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 7.73872ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq/proxy/: test (200; 7.842743ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 7.949109ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 7.969963ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 8.132161ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 8.144188ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 7.985746ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 8.160646ms)
Jan 26 14:26:21.243: INFO: (17) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test (200; 6.563145ms)
Jan 26 14:26:21.255: INFO: (18) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 6.777049ms)
Jan 26 14:26:21.256: INFO: (18) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 6.983973ms)
Jan 26 14:26:21.256: INFO: (18) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 7.454081ms)
Jan 26 14:26:21.257: INFO: (18) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 8.215917ms)
Jan 26 14:26:21.257: INFO: (18) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 8.313251ms)
Jan 26 14:26:21.257: INFO: (18) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 8.285552ms)
Jan 26 14:26:21.257: INFO: (18) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test<... (200; 8.716091ms)
Jan 26 14:26:21.259: INFO: (18) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 10.322693ms)
Jan 26 14:26:21.259: INFO: (18) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 10.551125ms)
Jan 26 14:26:21.259: INFO: (18) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 10.695924ms)
Jan 26 14:26:21.260: INFO: (18) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 11.561547ms)
Jan 26 14:26:21.260: INFO: (18) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 11.592074ms)
Jan 26 14:26:21.260: INFO: (18) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 11.745671ms)
Jan 26 14:26:21.266: INFO: (19) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:1080/proxy/: ... (200; 5.68591ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:1080/proxy/: test<... (200; 5.811629ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:160/proxy/: foo (200; 5.983754ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:443/proxy/: test (200; 6.295643ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:462/proxy/: tls qux (200; 6.469169ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/http:proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 6.474825ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/proxy-service-q2dtk-kmvmq:162/proxy/: bar (200; 6.624777ms)
Jan 26 14:26:21.267: INFO: (19) /api/v1/namespaces/proxy-7459/pods/https:proxy-service-q2dtk-kmvmq:460/proxy/: tls baz (200; 6.511707ms)
Jan 26 14:26:21.268: INFO: (19) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname2/proxy/: bar (200; 7.738838ms)
Jan 26 14:26:21.271: INFO: (19) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname2/proxy/: bar (200; 10.726036ms)
Jan 26 14:26:21.271: INFO: (19) /api/v1/namespaces/proxy-7459/services/http:proxy-service-q2dtk:portname1/proxy/: foo (200; 10.617805ms)
Jan 26 14:26:21.271: INFO: (19) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname2/proxy/: tls qux (200; 10.861161ms)
Jan 26 14:26:21.272: INFO: (19) /api/v1/namespaces/proxy-7459/services/https:proxy-service-q2dtk:tlsportname1/proxy/: tls baz (200; 10.723164ms)
Jan 26 14:26:21.272: INFO: (19) /api/v1/namespaces/proxy-7459/services/proxy-service-q2dtk:portname1/proxy/: foo (200; 11.009267ms)
STEP: deleting ReplicationController proxy-service-q2dtk in namespace proxy-7459, will wait for the garbage collector to delete the pods
Jan 26 14:26:21.332: INFO: Deleting ReplicationController proxy-service-q2dtk took: 7.850418ms
Jan 26 14:26:21.632: INFO: Terminating ReplicationController proxy-service-q2dtk pods took: 300.541268ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:26:36.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7459" for this suite.
Jan 26 14:26:42.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:26:42.704: INFO: namespace proxy-7459 deletion completed in 6.13721761s

• [SLOW TEST:36.162 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
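The proxy conformance test above hits every pod and service proxy endpoint twenty times (the `(0)` through `(19)` blocks) and logs the body, status, and elapsed time for each request. A minimal sketch of that measurement loop, with `fetch` standing in for an HTTP GET through the apiserver proxy (a hypothetical callable, not the real e2e client):

```python
import time

def probe(fetch, iteration, path):
    """Time one request and format it the way the e2e proxy test logs it.

    `fetch` is any callable returning (status, body); here it is a
    stand-in for a GET through /api/v1/namespaces/<ns>/.../proxy/.
    """
    start = time.perf_counter()
    status, body = fetch(path)
    elapsed_ms = (time.perf_counter() - start) * 1000
    line = f"({iteration}) {path}: {body} ({status}; {elapsed_ms:.6f}ms)"
    return status, elapsed_ms, line

def run_iterations(fetch, paths, n=20):
    """Probe every endpoint n times, mirroring the (0)..(19) blocks."""
    results = []
    for i in range(n):
        for p in paths:
            results.append(probe(fetch, i, p))
    return results
```

The test passes as long as every probe returns 200; the per-request latencies in the log (mostly 3-16ms here) are informational.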
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:26:42.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-bda9dcd9-32f5-47c5-b772-ea26cee61029
STEP: Creating a pod to test consume secrets
Jan 26 14:26:42.828: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac" in namespace "projected-2431" to be "success or failure"
Jan 26 14:26:42.833: INFO: Pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292701ms
Jan 26 14:26:44.841: INFO: Pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012777586s
Jan 26 14:26:46.881: INFO: Pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052728976s
Jan 26 14:26:48.890: INFO: Pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061784841s
Jan 26 14:26:50.906: INFO: Pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077970992s
STEP: Saw pod success
Jan 26 14:26:50.906: INFO: Pod "pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac" satisfied condition "success or failure"
Jan 26 14:26:50.913: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 14:26:51.029: INFO: Waiting for pod pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac to disappear
Jan 26 14:26:51.052: INFO: Pod pod-projected-secrets-666a92f2-026b-468c-98d9-d2174dd860ac no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:26:51.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2431" for this suite.
Jan 26 14:26:57.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:26:57.198: INFO: namespace projected-2431 deletion completed in 6.139077706s

• [SLOW TEST:14.493 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:26:57.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 26 14:26:57.360: INFO: Waiting up to 5m0s for pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b" in namespace "emptydir-982" to be "success or failure"
Jan 26 14:26:57.366: INFO: Pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626224ms
Jan 26 14:26:59.374: INFO: Pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013824879s
Jan 26 14:27:01.380: INFO: Pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020315558s
Jan 26 14:27:03.389: INFO: Pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029125304s
Jan 26 14:27:05.410: INFO: Pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050296987s
STEP: Saw pod success
Jan 26 14:27:05.410: INFO: Pod "pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b" satisfied condition "success or failure"
Jan 26 14:27:05.421: INFO: Trying to get logs from node iruya-node pod pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b container test-container: 
STEP: delete the pod
Jan 26 14:27:05.551: INFO: Waiting for pod pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b to disappear
Jan 26 14:27:05.575: INFO: Pod pod-91b5ed19-7b1d-487f-ba67-36edd6d0bd7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:27:05.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-982" for this suite.
Jan 26 14:27:11.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:27:11.770: INFO: namespace emptydir-982 deletion completed in 6.184451602s

• [SLOW TEST:14.570 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:27:11.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 26 14:27:11.833: INFO: PodSpec: initContainers in spec.initContainers
Jan 26 14:28:06.885: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-591c77a5-78b6-4339-8889-02376f939016", GenerateName:"", Namespace:"init-container-2810", SelfLink:"/api/v1/namespaces/init-container-2810/pods/pod-init-591c77a5-78b6-4339-8889-02376f939016", UID:"5aa32c9d-2166-4614-92a4-7fe834dceba9", ResourceVersion:"21947957", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715645631, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"833801534"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4bwnj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c5d340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4bwnj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4bwnj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4bwnj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b72cc8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc002673800), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b72d50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b72d70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b72d78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b72d7c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715645632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715645632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715645632, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715645631, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001f9a820), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bcde30)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bcdea0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://fc4095ba3f86e130385788988041f6f1f6595b848d8bae876619610f8f0ee9c7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f9a860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f9a840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:28:06.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2810" for this suite.
Jan 26 14:28:21.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:28:21.196: INFO: namespace init-container-2810 deletion completed in 14.295207384s

• [SLOW TEST:69.425 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:28:21.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 26 14:28:21.393: INFO: Waiting up to 5m0s for pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec" in namespace "emptydir-4680" to be "success or failure"
Jan 26 14:28:21.412: INFO: Pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 18.798094ms
Jan 26 14:28:23.419: INFO: Pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026144407s
Jan 26 14:28:25.436: INFO: Pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043602388s
Jan 26 14:28:27.510: INFO: Pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117583389s
Jan 26 14:28:29.516: INFO: Pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123332657s
STEP: Saw pod success
Jan 26 14:28:29.516: INFO: Pod "pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec" satisfied condition "success or failure"
Jan 26 14:28:29.525: INFO: Trying to get logs from node iruya-node pod pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec container test-container: 
STEP: delete the pod
Jan 26 14:28:29.657: INFO: Waiting for pod pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec to disappear
Jan 26 14:28:29.672: INFO: Pod pod-c4e0a8db-c9ca-4e72-8edc-3e3959e3d0ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:28:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4680" for this suite.
Jan 26 14:28:35.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:28:35.847: INFO: namespace emptydir-4680 deletion completed in 6.152674887s

• [SLOW TEST:14.650 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:28:35.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 26 14:28:35.991: INFO: Waiting up to 5m0s for pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456" in namespace "emptydir-7340" to be "success or failure"
Jan 26 14:28:35.995: INFO: Pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456": Phase="Pending", Reason="", readiness=false. Elapsed: 3.955136ms
Jan 26 14:28:38.003: INFO: Pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012020942s
Jan 26 14:28:40.011: INFO: Pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01984211s
Jan 26 14:28:42.027: INFO: Pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036140425s
Jan 26 14:28:44.071: INFO: Pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079834829s
STEP: Saw pod success
Jan 26 14:28:44.071: INFO: Pod "pod-3610fb76-0e4b-4896-b63b-23a2bd119456" satisfied condition "success or failure"
Jan 26 14:28:44.089: INFO: Trying to get logs from node iruya-node pod pod-3610fb76-0e4b-4896-b63b-23a2bd119456 container test-container: 
STEP: delete the pod
Jan 26 14:28:44.248: INFO: Waiting for pod pod-3610fb76-0e4b-4896-b63b-23a2bd119456 to disappear
Jan 26 14:28:44.257: INFO: Pod pod-3610fb76-0e4b-4896-b63b-23a2bd119456 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:28:44.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7340" for this suite.
Jan 26 14:28:50.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:28:50.498: INFO: namespace emptydir-7340 deletion completed in 6.224105059s

• [SLOW TEST:14.651 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:28:50.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876
Jan 26 14:28:51.107: INFO: Pod name my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876: Found 0 pods out of 1
Jan 26 14:28:56.118: INFO: Pod name my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876: Found 1 pods out of 1
Jan 26 14:28:56.119: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876" are running
Jan 26 14:29:00.134: INFO: Pod "my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876-4gxmc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:28:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:28:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:28:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:28:51 +0000 UTC Reason: Message:}])
Jan 26 14:29:00.134: INFO: Trying to dial the pod
Jan 26 14:29:05.185: INFO: Controller my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876: Got expected result from replica 1 [my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876-4gxmc]: "my-hostname-basic-9a9d6453-aedd-4539-bf3e-3d9af23eb876-4gxmc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:29:05.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4994" for this suite.
Jan 26 14:29:11.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:29:11.410: INFO: namespace replication-controller-4994 deletion completed in 6.212948215s

• [SLOW TEST:20.911 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:29:11.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-eb5d38ea-7161-4e26-9269-669d57552de5
STEP: Creating secret with name s-test-opt-upd-0854f508-d211-4d28-9478-6511788c333d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-eb5d38ea-7161-4e26-9269-669d57552de5
STEP: Updating secret s-test-opt-upd-0854f508-d211-4d28-9478-6511788c333d
STEP: Creating secret with name s-test-opt-create-a54edbf9-e5f2-4895-aa05-b1c7e0af27f8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:29:30.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8466" for this suite.
Jan 26 14:29:52.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:29:52.158: INFO: namespace secrets-8466 deletion completed in 22.152957531s

• [SLOW TEST:40.747 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:29:52.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:29:52.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 26 14:29:52.448: INFO: stderr: ""
Jan 26 14:29:52.448: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:29:52.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6749" for this suite.
Jan 26 14:29:58.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:29:58.634: INFO: namespace kubectl-6749 deletion completed in 6.171199601s

• [SLOW TEST:6.476 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:29:58.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 14:29:58.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4052'
Jan 26 14:29:59.005: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 26 14:29:59.005: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 26 14:29:59.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4052'
Jan 26 14:29:59.236: INFO: stderr: ""
Jan 26 14:29:59.237: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:29:59.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4052" for this suite.
Jan 26 14:30:21.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:30:21.364: INFO: namespace kubectl-4052 deletion completed in 22.116879598s

• [SLOW TEST:22.728 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:30:21.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 26 14:30:21.494: INFO: Waiting up to 5m0s for pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89" in namespace "downward-api-7950" to be "success or failure"
Jan 26 14:30:21.517: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89": Phase="Pending", Reason="", readiness=false. Elapsed: 23.383885ms
Jan 26 14:30:23.528: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034247194s
Jan 26 14:30:25.545: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051045387s
Jan 26 14:30:27.558: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06426017s
Jan 26 14:30:29.568: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07424285s
Jan 26 14:30:31.579: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085081472s
STEP: Saw pod success
Jan 26 14:30:31.579: INFO: Pod "downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89" satisfied condition "success or failure"
Jan 26 14:30:31.584: INFO: Trying to get logs from node iruya-node pod downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89 container dapi-container: 
STEP: delete the pod
Jan 26 14:30:31.637: INFO: Waiting for pod downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89 to disappear
Jan 26 14:30:31.645: INFO: Pod downward-api-abaae0b2-78ff-499e-93d1-43e98ee21a89 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:30:31.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7950" for this suite.
Jan 26 14:30:37.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:30:37.910: INFO: namespace downward-api-7950 deletion completed in 6.21107181s

• [SLOW TEST:16.546 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:30:37.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 26 14:30:44.897: INFO: 0 pods remaining
Jan 26 14:30:44.897: INFO: 0 pods has nil DeletionTimestamp
Jan 26 14:30:44.897: INFO: 
STEP: Gathering metrics
W0126 14:30:45.824769       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 14:30:45.824: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:30:45.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2727" for this suite.
Jan 26 14:30:57.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:30:58.020: INFO: namespace gc-2727 deletion completed in 12.190751396s

• [SLOW TEST:20.110 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:30:58.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:30:58.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7" in namespace "downward-api-4263" to be "success or failure"
Jan 26 14:30:58.149: INFO: Pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258691ms
Jan 26 14:31:00.157: INFO: Pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011133898s
Jan 26 14:31:02.169: INFO: Pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023196788s
Jan 26 14:31:04.177: INFO: Pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030701615s
Jan 26 14:31:06.313: INFO: Pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167040948s
STEP: Saw pod success
Jan 26 14:31:06.313: INFO: Pod "downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7" satisfied condition "success or failure"
Jan 26 14:31:06.323: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7 container client-container: 
STEP: delete the pod
Jan 26 14:31:06.465: INFO: Waiting for pod downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7 to disappear
Jan 26 14:31:06.472: INFO: Pod downwardapi-volume-df529db5-8b57-4841-978b-0eaab17ecab7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:31:06.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4263" for this suite.
Jan 26 14:31:12.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:31:12.652: INFO: namespace downward-api-4263 deletion completed in 6.174972144s

• [SLOW TEST:14.631 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:31:12.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:31:12.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef" in namespace "projected-2292" to be "success or failure"
Jan 26 14:31:12.778: INFO: Pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef": Phase="Pending", Reason="", readiness=false. Elapsed: 28.203129ms
Jan 26 14:31:14.816: INFO: Pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065998211s
Jan 26 14:31:16.852: INFO: Pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101902824s
Jan 26 14:31:18.920: INFO: Pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169477301s
Jan 26 14:31:20.930: INFO: Pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180039232s
STEP: Saw pod success
Jan 26 14:31:20.930: INFO: Pod "downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef" satisfied condition "success or failure"
Jan 26 14:31:20.935: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef container client-container: 
STEP: delete the pod
Jan 26 14:31:21.121: INFO: Waiting for pod downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef to disappear
Jan 26 14:31:21.127: INFO: Pod downwardapi-volume-691d94e2-c24d-4934-9de6-9804387a97ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:31:21.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2292" for this suite.
Jan 26 14:31:27.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:31:28.034: INFO: namespace projected-2292 deletion completed in 6.900208436s

• [SLOW TEST:15.382 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
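[Editor's note] The pod that the "should set DefaultMode on files" test creates is built in Go by the e2e framework, not from a manifest. As a hedged illustration only, a hand-written pod of the same shape — a projected downwardAPI volume with an explicit `defaultMode` — might look like the following (the pod name, image, and file path here are illustrative, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container          # container name matches the one in the log above
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400             # the mode the test asserts on the projected files
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The test then reads the container's logs (the `Trying to get logs from node ...` lines above) and checks that the projected file carries the requested mode.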
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:31:28.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 26 14:34:26.374: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:26.417: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:28.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:28.427: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:30.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:30.425: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:32.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:32.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:34.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:34.447: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:36.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:36.426: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:38.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:38.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:40.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:40.426: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:42.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:42.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:44.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:44.472: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:46.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:46.641: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:48.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:48.436: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:50.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:50.425: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:52.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:52.440: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:54.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:54.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:56.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:56.424: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:34:58.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:34:58.427: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:35:00.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:35:00.426: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 26 14:35:02.417: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 26 14:35:02.427: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:35:02.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8941" for this suite.
Jan 26 14:35:24.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:35:24.582: INFO: namespace container-lifecycle-hook-8941 deletion completed in 22.141800995s

• [SLOW TEST:236.549 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
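[Editor's note] The long "Waiting for pod pod-with-poststart-exec-hook to disappear" loop above is the framework polling for graceful deletion of the hook pod. A hedged sketch of a pod with a postStart exec hook of the kind this test exercises (the pod name is taken from the log; the image, command, and hook body are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts; the real
          # test instead calls out to a separate handler pod to record the hook.
          command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]
```

Kubernetes gives no ordering guarantee between the container's entrypoint and the postStart hook, only that the hook fires after the container is created — which is why the test verifies the hook's side effect rather than timing.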
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:35:24.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 26 14:35:32.828: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:35:32.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7734" for this suite.
Jan 26 14:35:38.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:35:39.173: INFO: namespace container-runtime-7734 deletion completed in 6.273976421s

• [SLOW TEST:14.589 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
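[Editor's note] The `Expected: &{OK} to match Container's Termination Message: OK` line above is the assertion that the kubelet picked the termination message up from the file rather than from logs. A hedged illustration of the container spec shape involved (names and image are illustrative, not from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: termination-demo
    image: busybox
    # Writes "OK" to the termination message file before exiting successfully.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    # With FallbackToLogsOnError, logs are used only when the container fails
    # AND the message file is empty; here the pod succeeds, so the file wins.
    terminationMessagePolicy: FallbackToLogsOnError
```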
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:35:39.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 26 14:35:39.301: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1442,SelfLink:/api/v1/namespaces/watch-1442/configmaps/e2e-watch-test-watch-closed,UID:48392e46-253a-4928-a11a-490d60963c38,ResourceVersion:21948978,Generation:0,CreationTimestamp:2020-01-26 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 14:35:39.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1442,SelfLink:/api/v1/namespaces/watch-1442/configmaps/e2e-watch-test-watch-closed,UID:48392e46-253a-4928-a11a-490d60963c38,ResourceVersion:21948979,Generation:0,CreationTimestamp:2020-01-26 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 26 14:35:39.343: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1442,SelfLink:/api/v1/namespaces/watch-1442/configmaps/e2e-watch-test-watch-closed,UID:48392e46-253a-4928-a11a-490d60963c38,ResourceVersion:21948980,Generation:0,CreationTimestamp:2020-01-26 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 14:35:39.343: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-1442,SelfLink:/api/v1/namespaces/watch-1442/configmaps/e2e-watch-test-watch-closed,UID:48392e46-253a-4928-a11a-490d60963c38,ResourceVersion:21948981,Generation:0,CreationTimestamp:2020-01-26 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:35:39.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1442" for this suite.
Jan 26 14:35:45.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:35:45.519: INFO: namespace watch-1442 deletion completed in 6.163636262s

• [SLOW TEST:6.345 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:35:45.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 26 14:35:45.652: INFO: Waiting up to 5m0s for pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d" in namespace "downward-api-5905" to be "success or failure"
Jan 26 14:35:45.657: INFO: Pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.308069ms
Jan 26 14:35:47.665: INFO: Pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01378529s
Jan 26 14:35:49.672: INFO: Pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019967356s
Jan 26 14:35:51.680: INFO: Pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028663459s
Jan 26 14:35:53.688: INFO: Pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036001333s
STEP: Saw pod success
Jan 26 14:35:53.688: INFO: Pod "downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d" satisfied condition "success or failure"
Jan 26 14:35:53.692: INFO: Trying to get logs from node iruya-node pod downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d container dapi-container: 
STEP: delete the pod
Jan 26 14:35:53.823: INFO: Waiting for pod downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d to disappear
Jan 26 14:35:53.840: INFO: Pod downward-api-6d24e4b0-2d76-403d-a8fb-3926e7e9bb6d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:35:53.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5905" for this suite.
Jan 26 14:35:59.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:36:00.019: INFO: namespace downward-api-5905 deletion completed in 6.165085284s

• [SLOW TEST:14.500 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
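[Editor's note] This test relies on the documented behavior that a `resourceFieldRef` for `limits.cpu`/`limits.memory` falls back to node allocatable when the container declares no limits. A hedged sketch of such a pod (the container name `dapi-container` matches the log; the rest is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep LIMIT"]
    # No resources.limits set: the env vars below resolve to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```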
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:36:00.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:36:00.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0" in namespace "projected-1020" to be "success or failure"
Jan 26 14:36:00.147: INFO: Pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0": Phase="Pending", Reason="", readiness=false. Elapsed: 27.46957ms
Jan 26 14:36:02.158: INFO: Pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038248104s
Jan 26 14:36:04.171: INFO: Pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051481176s
Jan 26 14:36:06.179: INFO: Pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05967335s
Jan 26 14:36:08.212: INFO: Pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092686864s
STEP: Saw pod success
Jan 26 14:36:08.212: INFO: Pod "downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0" satisfied condition "success or failure"
Jan 26 14:36:08.216: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0 container client-container: 
STEP: delete the pod
Jan 26 14:36:08.303: INFO: Waiting for pod downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0 to disappear
Jan 26 14:36:08.361: INFO: Pod downwardapi-volume-df20f2c1-b69e-4134-88d8-785a3a9948e0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:36:08.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1020" for this suite.
Jan 26 14:36:14.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:36:14.563: INFO: namespace projected-1020 deletion completed in 6.194890976s

• [SLOW TEST:14.544 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:36:14.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:36:14.743: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 26 14:36:14.761: INFO: Number of nodes with available pods: 0
Jan 26 14:36:14.761: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 26 14:36:14.839: INFO: Number of nodes with available pods: 0
Jan 26 14:36:14.839: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:15.857: INFO: Number of nodes with available pods: 0
Jan 26 14:36:15.857: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:16.929: INFO: Number of nodes with available pods: 0
Jan 26 14:36:16.929: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:17.855: INFO: Number of nodes with available pods: 0
Jan 26 14:36:17.855: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:18.845: INFO: Number of nodes with available pods: 0
Jan 26 14:36:18.845: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:19.850: INFO: Number of nodes with available pods: 0
Jan 26 14:36:19.850: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:20.845: INFO: Number of nodes with available pods: 0
Jan 26 14:36:20.845: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:21.887: INFO: Number of nodes with available pods: 0
Jan 26 14:36:21.887: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:22.893: INFO: Number of nodes with available pods: 1
Jan 26 14:36:22.893: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 26 14:36:22.932: INFO: Number of nodes with available pods: 1
Jan 26 14:36:22.932: INFO: Number of running nodes: 0, number of available pods: 1
Jan 26 14:36:23.943: INFO: Number of nodes with available pods: 0
Jan 26 14:36:23.943: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 26 14:36:23.993: INFO: Number of nodes with available pods: 0
Jan 26 14:36:23.993: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:25.004: INFO: Number of nodes with available pods: 0
Jan 26 14:36:25.004: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:26.000: INFO: Number of nodes with available pods: 0
Jan 26 14:36:26.000: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:27.001: INFO: Number of nodes with available pods: 0
Jan 26 14:36:27.002: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:28.017: INFO: Number of nodes with available pods: 0
Jan 26 14:36:28.017: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:29.050: INFO: Number of nodes with available pods: 0
Jan 26 14:36:29.050: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:29.999: INFO: Number of nodes with available pods: 0
Jan 26 14:36:29.999: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:30.999: INFO: Number of nodes with available pods: 0
Jan 26 14:36:30.999: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:32.000: INFO: Number of nodes with available pods: 0
Jan 26 14:36:32.000: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:33.004: INFO: Number of nodes with available pods: 0
Jan 26 14:36:33.004: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:34.009: INFO: Number of nodes with available pods: 0
Jan 26 14:36:34.009: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:35.003: INFO: Number of nodes with available pods: 0
Jan 26 14:36:35.003: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:36.008: INFO: Number of nodes with available pods: 0
Jan 26 14:36:36.009: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:37.006: INFO: Number of nodes with available pods: 0
Jan 26 14:36:37.006: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:38.003: INFO: Number of nodes with available pods: 0
Jan 26 14:36:38.003: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:38.998: INFO: Number of nodes with available pods: 0
Jan 26 14:36:38.998: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:39.999: INFO: Number of nodes with available pods: 0
Jan 26 14:36:39.999: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:41.000: INFO: Number of nodes with available pods: 0
Jan 26 14:36:41.000: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:42.022: INFO: Number of nodes with available pods: 0
Jan 26 14:36:42.023: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:43.040: INFO: Number of nodes with available pods: 0
Jan 26 14:36:43.040: INFO: Node iruya-node is running more than one daemon pod
Jan 26 14:36:44.024: INFO: Number of nodes with available pods: 1
Jan 26 14:36:44.025: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-99, will wait for the garbage collector to delete the pods
Jan 26 14:36:44.112: INFO: Deleting DaemonSet.extensions daemon-set took: 21.418315ms
Jan 26 14:36:44.413: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.766439ms
Jan 26 14:36:51.320: INFO: Number of nodes with available pods: 0
Jan 26 14:36:51.320: INFO: Number of running nodes: 0, number of available pods: 0
Jan 26 14:36:51.325: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-99/daemonsets","resourceVersion":"21949183"},"items":null}

Jan 26 14:36:51.328: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-99/pods","resourceVersion":"21949183"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:36:51.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-99" for this suite.
Jan 26 14:36:57.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:36:57.625: INFO: namespace daemonsets-99 deletion completed in 6.225765932s

• [SLOW TEST:43.061 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
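[Editor's note] The label-flipping sequence above (blue → green) works because a DaemonSet's `nodeSelector` makes its pods follow node labels. As a hedged sketch only — the e2e framework constructs its DaemonSet in Go, and the label key/value and image here are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                  # name matches the log above
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate             # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                 # relabeling the node to green unschedules the pod
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
```

Relabeling a node so it no longer matches `nodeSelector` evicts the daemon pod; relabeling it back (or changing the selector, as the test does) schedules a fresh one.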
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:36:57.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 26 14:36:57.797: INFO: Waiting up to 5m0s for pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b" in namespace "emptydir-7621" to be "success or failure"
Jan 26 14:36:57.813: INFO: Pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.996785ms
Jan 26 14:36:59.866: INFO: Pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068495311s
Jan 26 14:37:01.872: INFO: Pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074886513s
Jan 26 14:37:03.904: INFO: Pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106216711s
Jan 26 14:37:05.954: INFO: Pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156415722s
STEP: Saw pod success
Jan 26 14:37:05.954: INFO: Pod "pod-e8b46a41-104e-4d1b-8060-712d86735e3b" satisfied condition "success or failure"
Jan 26 14:37:05.966: INFO: Trying to get logs from node iruya-node pod pod-e8b46a41-104e-4d1b-8060-712d86735e3b container test-container: 
STEP: delete the pod
Jan 26 14:37:06.033: INFO: Waiting for pod pod-e8b46a41-104e-4d1b-8060-712d86735e3b to disappear
Jan 26 14:37:06.119: INFO: Pod pod-e8b46a41-104e-4d1b-8060-712d86735e3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:37:06.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7621" for this suite.
Jan 26 14:37:12.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:37:12.243: INFO: namespace emptydir-7621 deletion completed in 6.112175037s

• [SLOW TEST:14.617 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
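The "success or failure" wait seen in the block above polls the pod phase roughly every 2s for up to 5m0s, logging elapsed time until a terminal phase is reached. A minimal stdlib-only sketch of that polling loop follows; the `get_phase` callback is a hypothetical stand-in for the framework's API-server lookup, not its real signature:

```python
import time

# Phases that end the wait, per the PodStatus phase values in the log.
TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase or timeout; return the phase.

    Mirrors the log's behaviour: an immediate first check, then ~2s polls
    for up to 5m0s. get_phase is an assumed stand-in for fetching
    pod.status.phase from the API server.
    """
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

Simulating the Pending/Pending/Succeeded sequence from the log with a no-op sleep exercises the same transitions without a cluster.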
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:37:12.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 26 14:37:12.347: INFO: Waiting up to 5m0s for pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f" in namespace "emptydir-3289" to be "success or failure"
Jan 26 14:37:12.399: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.841152ms
Jan 26 14:37:14.411: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064366959s
Jan 26 14:37:16.420: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073318954s
Jan 26 14:37:18.428: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081512425s
Jan 26 14:37:20.885: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53780657s
Jan 26 14:37:22.894: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.547626112s
STEP: Saw pod success
Jan 26 14:37:22.894: INFO: Pod "pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f" satisfied condition "success or failure"
Jan 26 14:37:22.900: INFO: Trying to get logs from node iruya-node pod pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f container test-container: 
STEP: delete the pod
Jan 26 14:37:22.958: INFO: Waiting for pod pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f to disappear
Jan 26 14:37:22.968: INFO: Pod pod-e487bc6a-8c1f-44ef-aec0-153e2261bc1f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:37:22.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3289" for this suite.
Jan 26 14:37:28.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:37:29.175: INFO: namespace emptydir-3289 deletion completed in 6.197957237s

• [SLOW TEST:16.932 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:37:29.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:37:29.252: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85" in namespace "projected-9616" to be "success or failure"
Jan 26 14:37:29.327: INFO: Pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85": Phase="Pending", Reason="", readiness=false. Elapsed: 74.302532ms
Jan 26 14:37:31.335: INFO: Pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082196629s
Jan 26 14:37:33.343: INFO: Pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090700057s
Jan 26 14:37:35.354: INFO: Pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101238947s
Jan 26 14:37:37.363: INFO: Pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110781901s
STEP: Saw pod success
Jan 26 14:37:37.363: INFO: Pod "downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85" satisfied condition "success or failure"
Jan 26 14:37:37.367: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85 container client-container: 
STEP: delete the pod
Jan 26 14:37:37.485: INFO: Waiting for pod downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85 to disappear
Jan 26 14:37:37.494: INFO: Pod downwardapi-volume-bbc89c83-dbb7-46eb-8fbd-4362f11f2a85 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:37:37.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9616" for this suite.
Jan 26 14:37:43.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:37:43.711: INFO: namespace projected-9616 deletion completed in 6.210203237s

• [SLOW TEST:14.537 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
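The downward API volume in the test above exposes `resources.limits.cpu` as a file whose content is a Kubernetes CPU quantity (e.g. `2` or `250m`). A small sketch of parsing such a value into millicores, covering only the plain and milli forms that CPU limits use (the function name and scope are illustrative, not the framework's code):

```python
def cpu_to_millicores(quantity: str) -> int:
    """Parse a Kubernetes CPU quantity ("2", "0.5", "250m") into millicores.

    The downward API volume file checked by the test contains a value in
    this format; "m" suffix means millicores, otherwise whole cores.
    """
    quantity = quantity.strip()
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)
```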
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:37:43.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-494
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 14:37:43.953: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 14:38:22.145: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-494 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 14:38:22.145: INFO: >>> kubeConfig: /root/.kube/config
I0126 14:38:22.227672       8 log.go:172] (0xc00280c210) (0xc002d803c0) Create stream
I0126 14:38:22.227800       8 log.go:172] (0xc00280c210) (0xc002d803c0) Stream added, broadcasting: 1
I0126 14:38:22.236971       8 log.go:172] (0xc00280c210) Reply frame received for 1
I0126 14:38:22.237047       8 log.go:172] (0xc00280c210) (0xc0001fc6e0) Create stream
I0126 14:38:22.237059       8 log.go:172] (0xc00280c210) (0xc0001fc6e0) Stream added, broadcasting: 3
I0126 14:38:22.239764       8 log.go:172] (0xc00280c210) Reply frame received for 3
I0126 14:38:22.239795       8 log.go:172] (0xc00280c210) (0xc002d80460) Create stream
I0126 14:38:22.239842       8 log.go:172] (0xc00280c210) (0xc002d80460) Stream added, broadcasting: 5
I0126 14:38:22.241543       8 log.go:172] (0xc00280c210) Reply frame received for 5
I0126 14:38:22.419681       8 log.go:172] (0xc00280c210) Data frame received for 3
I0126 14:38:22.419734       8 log.go:172] (0xc0001fc6e0) (3) Data frame handling
I0126 14:38:22.419755       8 log.go:172] (0xc0001fc6e0) (3) Data frame sent
I0126 14:38:22.633678       8 log.go:172] (0xc00280c210) Data frame received for 1
I0126 14:38:22.633784       8 log.go:172] (0xc00280c210) (0xc002d80460) Stream removed, broadcasting: 5
I0126 14:38:22.633868       8 log.go:172] (0xc002d803c0) (1) Data frame handling
I0126 14:38:22.633904       8 log.go:172] (0xc002d803c0) (1) Data frame sent
I0126 14:38:22.634189       8 log.go:172] (0xc00280c210) (0xc002d803c0) Stream removed, broadcasting: 1
I0126 14:38:22.634720       8 log.go:172] (0xc00280c210) (0xc0001fc6e0) Stream removed, broadcasting: 3
I0126 14:38:22.634773       8 log.go:172] (0xc00280c210) Go away received
I0126 14:38:22.634850       8 log.go:172] (0xc00280c210) (0xc002d803c0) Stream removed, broadcasting: 1
I0126 14:38:22.634894       8 log.go:172] (0xc00280c210) (0xc0001fc6e0) Stream removed, broadcasting: 3
I0126 14:38:22.634903       8 log.go:172] (0xc00280c210) (0xc002d80460) Stream removed, broadcasting: 5
Jan 26 14:38:22.634: INFO: Found all expected endpoints: [netserver-0]
Jan 26 14:38:22.645: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-494 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 14:38:22.645: INFO: >>> kubeConfig: /root/.kube/config
I0126 14:38:22.695987       8 log.go:172] (0xc001514e70) (0xc002ab7680) Create stream
I0126 14:38:22.696037       8 log.go:172] (0xc001514e70) (0xc002ab7680) Stream added, broadcasting: 1
I0126 14:38:22.701681       8 log.go:172] (0xc001514e70) Reply frame received for 1
I0126 14:38:22.701714       8 log.go:172] (0xc001514e70) (0xc002ab7720) Create stream
I0126 14:38:22.701721       8 log.go:172] (0xc001514e70) (0xc002ab7720) Stream added, broadcasting: 3
I0126 14:38:22.702922       8 log.go:172] (0xc001514e70) Reply frame received for 3
I0126 14:38:22.702949       8 log.go:172] (0xc001514e70) (0xc002ab77c0) Create stream
I0126 14:38:22.702956       8 log.go:172] (0xc001514e70) (0xc002ab77c0) Stream added, broadcasting: 5
I0126 14:38:22.704482       8 log.go:172] (0xc001514e70) Reply frame received for 5
I0126 14:38:22.789099       8 log.go:172] (0xc001514e70) Data frame received for 3
I0126 14:38:22.789122       8 log.go:172] (0xc002ab7720) (3) Data frame handling
I0126 14:38:22.789136       8 log.go:172] (0xc002ab7720) (3) Data frame sent
I0126 14:38:22.958296       8 log.go:172] (0xc001514e70) Data frame received for 1
I0126 14:38:22.958376       8 log.go:172] (0xc001514e70) (0xc002ab7720) Stream removed, broadcasting: 3
I0126 14:38:22.958432       8 log.go:172] (0xc002ab7680) (1) Data frame handling
I0126 14:38:22.958463       8 log.go:172] (0xc002ab7680) (1) Data frame sent
I0126 14:38:22.958530       8 log.go:172] (0xc001514e70) (0xc002ab77c0) Stream removed, broadcasting: 5
I0126 14:38:22.958614       8 log.go:172] (0xc001514e70) (0xc002ab7680) Stream removed, broadcasting: 1
I0126 14:38:22.958949       8 log.go:172] (0xc001514e70) (0xc002ab7680) Stream removed, broadcasting: 1
I0126 14:38:22.958972       8 log.go:172] (0xc001514e70) (0xc002ab7720) Stream removed, broadcasting: 3
I0126 14:38:22.958987       8 log.go:172] (0xc001514e70) (0xc002ab77c0) Stream removed, broadcasting: 5
Jan 26 14:38:22.959: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:38:22.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-494" for this suite.
Jan 26 14:38:47.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:38:47.162: INFO: namespace pod-network-test-494 deletion completed in 24.184244366s

• [SLOW TEST:63.448 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
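Each `ExecWithOptions` above curls `http://<pod-ip>:8080/hostName`, which a netserver pod answers with its own hostname; the test passes once every expected endpoint has been seen. A sketch of that aggregation logic, with `fetch_hostname` as an assumed stand-in for the curl exec:

```python
def find_expected_endpoints(fetch_hostname, targets, expected, max_tries=3):
    """Query each target IP's /hostName and report which expected
    endpoints responded.

    fetch_hostname(ip) plays the role of the log's
    `curl http://<ip>:8080/hostName` exec; expected is a set such as
    {"netserver-0", "netserver-1"}.
    """
    seen = set()
    for _ in range(max_tries):
        for ip in targets:
            try:
                seen.add(fetch_hostname(ip).strip())
            except OSError:
                continue  # unreachable this round; retry on the next pass
        if expected <= seen:
            return sorted(expected)
    raise RuntimeError(f"missing endpoints: {sorted(expected - seen)}")
```

With a lookup table standing in for the two pod IPs from the log, the check reports both netservers as found.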
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:38:47.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 26 14:39:03.391: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 14:39:03.414: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 14:39:05.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 14:39:05.425: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 14:39:07.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 14:39:07.881: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 14:39:09.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 14:39:09.423: INFO: Pod pod-with-prestop-http-hook still exists
Jan 26 14:39:11.415: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 26 14:39:11.425: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:39:11.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1901" for this suite.
Jan 26 14:39:33.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:39:33.618: INFO: namespace container-lifecycle-hook-1901 deletion completed in 22.16042579s

• [SLOW TEST:46.456 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
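The "Waiting for pod ... to disappear" loop above polls every ~2s until the pod is gone, after which the test verifies the handler pod received the PreStop HTTP request. A sketch of the disappearance wait; `pod_exists` is an assumed stand-in for a GET against the API server (404 meaning gone):

```python
import time

def wait_for_disappear(pod_exists, timeout=120.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll until pod_exists() is False, mirroring the
    'still exists' / 'no longer exists' lines in the log."""
    deadline = clock() + timeout
    while pod_exists():
        if clock() >= deadline:
            raise TimeoutError("pod still exists")
        sleep(interval)
```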
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:39:33.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-lzhz
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 14:39:33.773: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lzhz" in namespace "subpath-6028" to be "success or failure"
Jan 26 14:39:33.782: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.479417ms
Jan 26 14:39:35.793: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019858827s
Jan 26 14:39:37.808: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035196915s
Jan 26 14:39:39.818: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045268187s
Jan 26 14:39:41.832: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0584966s
Jan 26 14:39:43.842: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 10.069252952s
Jan 26 14:39:45.857: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 12.083952112s
Jan 26 14:39:47.871: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 14.098345058s
Jan 26 14:39:49.883: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 16.11039357s
Jan 26 14:39:51.904: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 18.130597875s
Jan 26 14:39:53.914: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 20.141372163s
Jan 26 14:39:55.924: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 22.150649647s
Jan 26 14:39:57.941: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 24.168083749s
Jan 26 14:39:59.951: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 26.177925106s
Jan 26 14:40:01.961: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Running", Reason="", readiness=true. Elapsed: 28.188156665s
Jan 26 14:40:03.977: INFO: Pod "pod-subpath-test-projected-lzhz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.20357016s
STEP: Saw pod success
Jan 26 14:40:03.977: INFO: Pod "pod-subpath-test-projected-lzhz" satisfied condition "success or failure"
Jan 26 14:40:03.984: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-lzhz container test-container-subpath-projected-lzhz: 
STEP: delete the pod
Jan 26 14:40:04.035: INFO: Waiting for pod pod-subpath-test-projected-lzhz to disappear
Jan 26 14:40:04.044: INFO: Pod pod-subpath-test-projected-lzhz no longer exists
STEP: Deleting pod pod-subpath-test-projected-lzhz
Jan 26 14:40:04.044: INFO: Deleting pod "pod-subpath-test-projected-lzhz" in namespace "subpath-6028"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:40:04.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6028" for this suite.
Jan 26 14:40:10.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:40:10.294: INFO: namespace subpath-6028 deletion completed in 6.242659122s

• [SLOW TEST:36.675 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:40:10.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-lnwm
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 14:40:10.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lnwm" in namespace "subpath-4798" to be "success or failure"
Jan 26 14:40:10.447: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Pending", Reason="", readiness=false. Elapsed: 15.657643ms
Jan 26 14:40:12.460: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028152803s
Jan 26 14:40:14.471: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039273802s
Jan 26 14:40:16.483: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05120201s
Jan 26 14:40:18.502: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 8.06975209s
Jan 26 14:40:20.530: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 10.098065251s
Jan 26 14:40:22.546: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 12.114206475s
Jan 26 14:40:24.557: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 14.125339629s
Jan 26 14:40:26.574: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 16.141929201s
Jan 26 14:40:28.591: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 18.159552669s
Jan 26 14:40:30.607: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 20.175338265s
Jan 26 14:40:32.615: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 22.182963402s
Jan 26 14:40:34.621: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 24.189355153s
Jan 26 14:40:36.655: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 26.223154127s
Jan 26 14:40:38.668: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Running", Reason="", readiness=true. Elapsed: 28.236324651s
Jan 26 14:40:40.690: INFO: Pod "pod-subpath-test-secret-lnwm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.257819969s
STEP: Saw pod success
Jan 26 14:40:40.690: INFO: Pod "pod-subpath-test-secret-lnwm" satisfied condition "success or failure"
Jan 26 14:40:40.694: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-lnwm container test-container-subpath-secret-lnwm: 
STEP: delete the pod
Jan 26 14:40:40.743: INFO: Waiting for pod pod-subpath-test-secret-lnwm to disappear
Jan 26 14:40:40.775: INFO: Pod pod-subpath-test-secret-lnwm no longer exists
STEP: Deleting pod pod-subpath-test-secret-lnwm
Jan 26 14:40:40.776: INFO: Deleting pod "pod-subpath-test-secret-lnwm" in namespace "subpath-4798"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:40:40.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4798" for this suite.
Jan 26 14:40:46.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:40:46.951: INFO: namespace subpath-4798 deletion completed in 6.165791981s

• [SLOW TEST:36.656 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:40:46.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:40:47.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995" in namespace "downward-api-8152" to be "success or failure"
Jan 26 14:40:47.125: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995": Phase="Pending", Reason="", readiness=false. Elapsed: 11.376408ms
Jan 26 14:40:49.139: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025520617s
Jan 26 14:40:51.151: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037154396s
Jan 26 14:40:53.163: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049275016s
Jan 26 14:40:55.175: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061092987s
Jan 26 14:40:57.182: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068092714s
STEP: Saw pod success
Jan 26 14:40:57.182: INFO: Pod "downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995" satisfied condition "success or failure"
Jan 26 14:40:57.184: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995 container client-container: 
STEP: delete the pod
Jan 26 14:40:57.218: INFO: Waiting for pod downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995 to disappear
Jan 26 14:40:57.222: INFO: Pod downwardapi-volume-224e325c-66e0-4d93-91c4-f77a2ae4d995 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:40:57.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8152" for this suite.
Jan 26 14:41:03.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:41:03.351: INFO: namespace downward-api-8152 deletion completed in 6.125175447s

• [SLOW TEST:16.400 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:41:03.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 26 14:41:13.503: INFO: Pod pod-hostip-b5bb2e9c-0fa8-4694-b182-21d6775af0bc has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:41:13.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6529" for this suite.
Jan 26 14:41:35.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:41:35.638: INFO: namespace pods-6529 deletion completed in 22.123854455s

• [SLOW TEST:32.287 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
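The host-IP check above amounts to creating an ordinary pod and reading `status.hostIP` back from the API once it is scheduled. A minimal sketch of such a pod follows; the name and image are illustrative, not the generated ones from the log, since the test only inspects pod status, not container output.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example   # hypothetical name, not the test's generated one
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
# Once scheduled, the node's address shows up in pod status:
#   kubectl get pod pod-hostip-example -o jsonpath='{.status.hostIP}'
```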
SSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:41:35.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-23250cac-cac7-4f22-950c-654408bc8115
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-23250cac-cac7-4f22-950c-654408bc8115
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:41:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2893" for this suite.
Jan 26 14:42:08.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:42:08.243: INFO: namespace configmap-2893 deletion completed in 22.169259819s

• [SLOW TEST:32.605 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
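The ConfigMap test above follows a create/mount/update/observe pattern: mount a ConfigMap as a volume, change its data, and wait for the kubelet to project the new value into the running pod. A rough sketch of that pattern, with made-up names in place of the generated ones:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-cm          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-pod
spec:
  containers:
  - name: reader
    image: busybox
    # Re-read the mounted file periodically so updates become visible.
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: example-cm
```

Editing the ConfigMap (`kubectl edit configmap example-cm`) is eventually reflected in the mounted file, after the kubelet's sync period — which is why the log shows a "waiting to observe update in volume" step rather than an immediate check.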
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:42:08.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 26 14:42:08.423: INFO: Waiting up to 5m0s for pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354" in namespace "downward-api-8108" to be "success or failure"
Jan 26 14:42:08.441: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Pending", Reason="", readiness=false. Elapsed: 18.282225ms
Jan 26 14:42:10.457: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033507588s
Jan 26 14:42:12.475: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052206826s
Jan 26 14:42:14.556: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132954421s
Jan 26 14:42:16.575: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151779073s
Jan 26 14:42:18.589: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Pending", Reason="", readiness=false. Elapsed: 10.166373685s
Jan 26 14:42:20.603: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.180130151s
STEP: Saw pod success
Jan 26 14:42:20.604: INFO: Pod "downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354" satisfied condition "success or failure"
Jan 26 14:42:20.617: INFO: Trying to get logs from node iruya-node pod downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354 container dapi-container: 
STEP: delete the pod
Jan 26 14:42:20.744: INFO: Waiting for pod downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354 to disappear
Jan 26 14:42:20.756: INFO: Pod downward-api-cccbc58f-39f1-4fe3-baa9-25e520045354 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:42:20.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8108" for this suite.
Jan 26 14:42:26.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:42:26.941: INFO: namespace downward-api-8108 deletion completed in 6.178294684s

• [SLOW TEST:18.697 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
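The "host IP as an env var" test exercises the downward API's `fieldRef` mechanism. A minimal pod demonstrating the same idea (names are illustrative; the env var name is an assumption, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # Print the injected value once, then exit, so the pod reaches Succeeded.
    command: ["sh", "-c", "echo $HOST_IP"]
    env:
    - name: HOST_IP            # hypothetical variable name
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```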
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:42:26.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:42:27.084: INFO: Creating ReplicaSet my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417
Jan 26 14:42:27.097: INFO: Pod name my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417: Found 0 pods out of 1
Jan 26 14:42:32.106: INFO: Pod name my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417: Found 1 pods out of 1
Jan 26 14:42:32.106: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417" is running
Jan 26 14:42:36.117: INFO: Pod "my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417-l4s7b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:42:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:42:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:42:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-26 14:42:27 +0000 UTC Reason: Message:}])
Jan 26 14:42:36.117: INFO: Trying to dial the pod
Jan 26 14:42:41.151: INFO: Controller my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417: Got expected result from replica 1 [my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417-l4s7b]: "my-hostname-basic-efbf000e-c71e-470f-b0aa-580ff66a2417-l4s7b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:42:41.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4140" for this suite.
Jan 26 14:42:47.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:42:47.377: INFO: namespace replicaset-4140 deletion completed in 6.218498297s

• [SLOW TEST:20.435 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
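The ReplicaSet test above creates a one-replica set whose pod serves its own hostname, then dials each replica and checks the response matches the pod name. A sketch under those assumptions — the image shown is a stand-in for a public serve-hostname image, and any server that reports its hostname would satisfy the same check:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic     # illustrative; the test appends a UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image
        ports:
        - containerPort: 9376
```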
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:42:47.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 26 14:42:47.603: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3133,SelfLink:/api/v1/namespaces/watch-3133/configmaps/e2e-watch-test-resource-version,UID:8884bc04-db8c-4d2d-ac89-76fb9c046771,ResourceVersion:21950034,Generation:0,CreationTimestamp:2020-01-26 14:42:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 14:42:47.603: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3133,SelfLink:/api/v1/namespaces/watch-3133/configmaps/e2e-watch-test-resource-version,UID:8884bc04-db8c-4d2d-ac89-76fb9c046771,ResourceVersion:21950035,Generation:0,CreationTimestamp:2020-01-26 14:42:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:42:47.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3133" for this suite.
Jan 26 14:42:53.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:42:53.792: INFO: namespace watch-3133 deletion completed in 6.173997969s

• [SLOW TEST:6.415 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:42:53.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-4ed67a12-741d-46e2-97a7-7b0a862f83cf
STEP: Creating a pod to test consume secrets
Jan 26 14:42:53.972: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5" in namespace "projected-4632" to be "success or failure"
Jan 26 14:42:53.980: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.652664ms
Jan 26 14:42:55.986: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014067746s
Jan 26 14:42:57.997: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024612488s
Jan 26 14:43:00.005: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032793903s
Jan 26 14:43:02.019: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047134969s
Jan 26 14:43:04.031: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058991211s
STEP: Saw pod success
Jan 26 14:43:04.031: INFO: Pod "pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5" satisfied condition "success or failure"
Jan 26 14:43:04.036: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5 container projected-secret-volume-test: 
STEP: delete the pod
Jan 26 14:43:04.112: INFO: Waiting for pod pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5 to disappear
Jan 26 14:43:04.119: INFO: Pod pod-projected-secrets-39b77ad3-8fbb-4d05-b2cb-3fe270ad50e5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:43:04.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4632" for this suite.
Jan 26 14:43:10.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:43:10.312: INFO: namespace projected-4632 deletion completed in 6.187108744s

• [SLOW TEST:16.519 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
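A projected secret volume, as consumed above, wraps one or more sources (here a single Secret) under a `projected` volume. A minimal sketch with illustrative names in place of the generated ones:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-example   # illustrative name
data:
  data-1: dmFsdWUtMQ==             # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Read the projected key once and exit, so the pod reaches Succeeded.
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-example
```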
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:43:10.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:43:18.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2867" for this suite.
Jan 26 14:44:10.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:44:10.680: INFO: namespace kubelet-test-2867 deletion completed in 52.194534937s

• [SLOW TEST:60.367 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
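The busybox-logs test above boils down to running a short-lived command and asserting its output appears via the logs endpoint. A sketch of the kind of pod involved (name and echoed text are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]
# The echoed line is then retrievable with:
#   kubectl logs busybox-logs
```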
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:44:10.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 26 14:44:10.728: INFO: Waiting up to 5m0s for pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a" in namespace "containers-6626" to be "success or failure"
Jan 26 14:44:10.745: INFO: Pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.464123ms
Jan 26 14:44:12.758: INFO: Pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030512441s
Jan 26 14:44:15.220: INFO: Pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492768327s
Jan 26 14:44:17.261: INFO: Pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533315014s
Jan 26 14:44:19.273: INFO: Pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.545296148s
STEP: Saw pod success
Jan 26 14:44:19.273: INFO: Pod "client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a" satisfied condition "success or failure"
Jan 26 14:44:19.278: INFO: Trying to get logs from node iruya-node pod client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a container test-container: 
STEP: delete the pod
Jan 26 14:44:19.372: INFO: Waiting for pod client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a to disappear
Jan 26 14:44:19.378: INFO: Pod client-containers-04fe0f2d-12b0-431d-89fe-b49e7fa64c8a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:44:19.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6626" for this suite.
Jan 26 14:44:25.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:44:25.578: INFO: namespace containers-6626 deletion completed in 6.194709334s

• [SLOW TEST:14.898 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
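The override-arguments test relies on the mapping between the container spec and Docker's image metadata: Kubernetes `args` corresponds to Docker `CMD`, and setting it replaces the image's default arguments while leaving the `ENTRYPOINT` (Kubernetes `command`) intact. A sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-args   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # `args` replaces the image's default CMD; `command` is left unset,
    # so the image's entrypoint still applies.
    args: ["echo", "override", "arguments"]
```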
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:44:25.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:44:25.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8866" for this suite.
Jan 26 14:44:31.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:44:31.936: INFO: namespace kubelet-test-8866 deletion completed in 6.185353762s

• [SLOW TEST:6.357 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:44:31.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6765
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6765
STEP: Creating statefulset with conflicting port in namespace statefulset-6765
STEP: Waiting until pod test-pod starts running in namespace statefulset-6765
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6765
Jan 26 14:44:42.149: INFO: Observed stateful pod in namespace: statefulset-6765, name: ss-0, uid: 5dbd546d-0b5b-436f-9723-0ee8b8e37da1, status phase: Pending. Waiting for statefulset controller to delete.
Jan 26 14:44:46.541: INFO: Observed stateful pod in namespace: statefulset-6765, name: ss-0, uid: 5dbd546d-0b5b-436f-9723-0ee8b8e37da1, status phase: Failed. Waiting for statefulset controller to delete.
Jan 26 14:44:46.656: INFO: Observed stateful pod in namespace: statefulset-6765, name: ss-0, uid: 5dbd546d-0b5b-436f-9723-0ee8b8e37da1, status phase: Failed. Waiting for statefulset controller to delete.
Jan 26 14:44:46.666: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6765
STEP: Removing pod with conflicting port in namespace statefulset-6765
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6765 and in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 26 14:44:58.801: INFO: Deleting all statefulset in ns statefulset-6765
Jan 26 14:44:58.805: INFO: Scaling statefulset ss to 0
Jan 26 14:45:08.839: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:45:08.851: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:45:08.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6765" for this suite.
Jan 26 14:45:14.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:45:15.072: INFO: namespace statefulset-6765 deletion completed in 6.1500162s

• [SLOW TEST:43.135 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
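The StatefulSet eviction scenario above is driven by a host-port conflict: a plain pod claims a `hostPort` on a node, and the StatefulSet's pod template requests the same port on the same node, so `ss-0` repeatedly fails until the blocking pod is removed. A rough sketch of that setup — node name and port number are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: some-node            # assumed node name; pins both pods together
  containers:
  - name: blocker
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 80
      hostPort: 21017            # illustrative port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: some-node
      containers:
      - name: webserver
        image: busybox
        command: ["sleep", "3600"]
        ports:
        - containerPort: 80
          hostPort: 21017        # same hostPort => scheduling conflict
```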
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:45:15.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 26 14:45:15.135: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:45:33.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8848" for this suite.
Jan 26 14:45:55.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:45:55.472: INFO: namespace init-container-8848 deletion completed in 22.10392411s

• [SLOW TEST:40.399 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
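Init containers, as invoked above, run to completion sequentially before the app containers start; with `restartPolicy: Always` the app container then keeps running. A minimal sketch with illustrative names and commands:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo            # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]            # must exit 0 before init-2 starts
  - name: init-2
    image: busybox
    command: ["true"]            # must exit 0 before the app container starts
  containers:
  - name: run
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```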
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:45:55.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 26 14:45:55.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 26 14:45:55.845: INFO: stderr: ""
Jan 26 14:45:55.845: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:45:55.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9014" for this suite.
Jan 26 14:46:01.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:46:02.020: INFO: namespace kubectl-9014 deletion completed in 6.163824703s

• [SLOW TEST:6.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:46:02.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 26 14:46:20.230: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:20.245: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:22.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:22.255: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:24.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:24.417: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:26.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:26.260: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:28.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:28.255: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:30.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:30.256: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:32.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:32.256: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:34.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:34.252: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:36.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:36.252: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:38.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:38.298: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:40.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:40.252: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:42.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:42.311: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:44.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:44.253: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:46.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:46.286: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 26 14:46:48.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 26 14:46:48.265: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:46:48.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-53" for this suite.
Jan 26 14:47:10.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:47:10.544: INFO: namespace container-lifecycle-hook-53 deletion completed in 22.216668034s

• [SLOW TEST:68.524 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:47:10.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 26 14:47:19.298: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1ef88f13-a710-4b5d-ab1f-59bc328ec496"
Jan 26 14:47:19.298: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1ef88f13-a710-4b5d-ab1f-59bc328ec496" in namespace "pods-3432" to be "terminated due to deadline exceeded"
Jan 26 14:47:19.363: INFO: Pod "pod-update-activedeadlineseconds-1ef88f13-a710-4b5d-ab1f-59bc328ec496": Phase="Running", Reason="", readiness=true. Elapsed: 64.622427ms
Jan 26 14:47:21.391: INFO: Pod "pod-update-activedeadlineseconds-1ef88f13-a710-4b5d-ab1f-59bc328ec496": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.092239455s
Jan 26 14:47:21.391: INFO: Pod "pod-update-activedeadlineseconds-1ef88f13-a710-4b5d-ab1f-59bc328ec496" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:47:21.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3432" for this suite.
Jan 26 14:47:27.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:47:27.589: INFO: namespace pods-3432 deletion completed in 6.193699194s

• [SLOW TEST:17.044 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:47:27.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:47:36.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7540" for this suite.
Jan 26 14:47:58.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:47:58.864: INFO: namespace replication-controller-7540 deletion completed in 22.115091076s

• [SLOW TEST:31.273 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:47:58.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:47:58.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca" in namespace "projected-2523" to be "success or failure"
Jan 26 14:47:59.000: INFO: Pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.405872ms
Jan 26 14:48:01.014: INFO: Pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024069397s
Jan 26 14:48:03.020: INFO: Pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030177166s
Jan 26 14:48:05.033: INFO: Pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042700752s
Jan 26 14:48:07.039: INFO: Pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04915456s
STEP: Saw pod success
Jan 26 14:48:07.039: INFO: Pod "downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca" satisfied condition "success or failure"
Jan 26 14:48:07.049: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca container client-container: 
STEP: delete the pod
Jan 26 14:48:07.135: INFO: Waiting for pod downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca to disappear
Jan 26 14:48:07.141: INFO: Pod downwardapi-volume-48a65f49-7f9d-472b-8a1b-6122189e33ca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:48:07.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2523" for this suite.
Jan 26 14:48:13.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:48:13.250: INFO: namespace projected-2523 deletion completed in 6.099904407s

• [SLOW TEST:14.386 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:48:13.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 14:48:13.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434" in namespace "downward-api-5829" to be "success or failure"
Jan 26 14:48:13.476: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434": Phase="Pending", Reason="", readiness=false. Elapsed: 134.156697ms
Jan 26 14:48:15.486: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144687744s
Jan 26 14:48:17.534: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191968982s
Jan 26 14:48:19.542: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20071639s
Jan 26 14:48:21.552: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210505203s
Jan 26 14:48:23.568: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.226317133s
STEP: Saw pod success
Jan 26 14:48:23.568: INFO: Pod "downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434" satisfied condition "success or failure"
Jan 26 14:48:23.584: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434 container client-container: 
STEP: delete the pod
Jan 26 14:48:23.749: INFO: Waiting for pod downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434 to disappear
Jan 26 14:48:23.755: INFO: Pod downwardapi-volume-a2a939cf-7c01-417f-8d29-8017c9dcb434 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:48:23.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5829" for this suite.
Jan 26 14:48:29.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:48:29.964: INFO: namespace downward-api-5829 deletion completed in 6.202702062s

• [SLOW TEST:16.713 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:48:29.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 26 14:48:40.643: INFO: Successfully updated pod "pod-update-38fbdc47-c992-4f41-afb5-631c00889919"
STEP: verifying the updated pod is in kubernetes
Jan 26 14:48:40.656: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:48:40.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1726" for this suite.
Jan 26 14:49:02.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:49:02.805: INFO: namespace pods-1726 deletion completed in 22.144790482s

• [SLOW TEST:32.841 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:49:02.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 26 14:49:11.572: INFO: Successfully updated pod "labelsupdate5c23713c-6bc7-4c59-850e-7da0b8c8f778"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:49:13.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1699" for this suite.
Jan 26 14:49:35.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:49:35.831: INFO: namespace downward-api-1699 deletion completed in 22.139256575s

• [SLOW TEST:33.026 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:49:35.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-d9461228-051b-4aa1-9843-1ed42a8aa932
STEP: Creating configMap with name cm-test-opt-upd-7a3de531-814f-4d76-9966-26a7d685684e
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d9461228-051b-4aa1-9843-1ed42a8aa932
STEP: Updating configmap cm-test-opt-upd-7a3de531-814f-4d76-9966-26a7d685684e
STEP: Creating configMap with name cm-test-opt-create-2b522e82-91a3-4aaf-87bc-eba46350596a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:49:52.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9139" for this suite.
Jan 26 14:50:14.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:50:14.631: INFO: namespace configmap-9139 deletion completed in 22.218333439s

• [SLOW TEST:38.799 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:50:14.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-1350
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1350
STEP: Deleting pre-stop pod
Jan 26 14:50:35.958: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:50:35.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1350" for this suite.
Jan 26 14:51:18.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:51:18.137: INFO: namespace prestop-1350 deletion completed in 42.154431232s

• [SLOW TEST:63.506 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:51:18.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 26 14:51:18.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2239'
Jan 26 14:51:20.581: INFO: stderr: ""
Jan 26 14:51:20.581: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 26 14:51:30.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2239 -o json'
Jan 26 14:51:30.824: INFO: stderr: ""
Jan 26 14:51:30.824: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-26T14:51:20Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-2239\",\n        \"resourceVersion\": \"21951308\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2239/pods/e2e-test-nginx-pod\",\n        \"uid\": \"2ba9ac9e-c4ae-426d-b738-311e4363ee0a\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-bq8xg\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-bq8xg\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-bq8xg\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T14:51:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T14:51:28Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T14:51:28Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-26T14:51:20Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://8ae6cafb2de089ce9efddc353d2a0fd21beea450b5a5487619da938dfe3cbe9b\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-01-26T14:51:27Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-26T14:51:21Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 26 14:51:30.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2239'
Jan 26 14:51:31.376: INFO: stderr: ""
Jan 26 14:51:31.376: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 26 14:51:31.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2239'
Jan 26 14:51:38.971: INFO: stderr: ""
Jan 26 14:51:38.971: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:51:38.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2239" for this suite.
Jan 26 14:51:45.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:51:45.133: INFO: namespace kubectl-2239 deletion completed in 6.157146794s

• [SLOW TEST:26.996 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
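The `Kubectl replace` test above creates an nginx pod and then swaps its container image for busybox via `kubectl replace`. The same sequence can be reproduced by hand; this is an illustrative sketch against a live cluster (pod name and images mirror the log, but the namespace handling is simplified), not the exact manifest the test pipes to `replace -f -`:

```shell
# Create a single-container nginx pod (name/image taken from the log above).
kubectl run e2e-test-nginx-pod --image=nginx:1.14-alpine --restart=Never

# `kubectl replace` performs a full PUT of the object, so feed it the live
# spec with only the container image changed.
kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -

# Verify the image was swapped, as the test's "verifying the pod ... has the
# right image" step does.
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'
```

Note that a pod's container image is one of the few spec fields that may be mutated in place, which is why `replace` succeeds here without deleting the pod first.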
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:51:45.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 14:51:45.297: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.985119ms)
Jan 26 14:51:45.302: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.272565ms)
Jan 26 14:51:45.309: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.126193ms)
Jan 26 14:51:45.315: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.172466ms)
Jan 26 14:51:45.320: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.543207ms)
Jan 26 14:51:45.326: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.557815ms)
Jan 26 14:51:45.333: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.022618ms)
Jan 26 14:51:45.339: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.910217ms)
Jan 26 14:51:45.345: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.757003ms)
Jan 26 14:51:45.350: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.073656ms)
Jan 26 14:51:45.359: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.48451ms)
Jan 26 14:51:45.367: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.7456ms)
Jan 26 14:51:45.375: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.498783ms)
Jan 26 14:51:45.383: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.577975ms)
Jan 26 14:51:45.390: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.647982ms)
Jan 26 14:51:45.395: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.064295ms)
Jan 26 14:51:45.401: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.125204ms)
Jan 26 14:51:45.408: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.529207ms)
Jan 26 14:51:45.414: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.220752ms)
Jan 26 14:51:45.422: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.021992ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:51:45.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6701" for this suite.
Jan 26 14:51:51.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:51:51.637: INFO: namespace proxy-6701 deletion completed in 6.208904874s

• [SLOW TEST:6.503 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
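The twenty numbered `(0)`..`(19)` lines above are repeated GETs of the node proxy subresource, each returning the kubelet's log directory listing with a 200 and a latency measurement. The same endpoint can be queried directly; `iruya-node` and port `10250` are taken from the log, while `$TOKEN`/`$APISERVER` are placeholder assumptions:

```shell
# Read the kubelet's /logs/ tree through the apiserver's node proxy
# subresource, exactly the path exercised by the test.
kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"

# Equivalent raw HTTP call, assuming a bearer token and apiserver URL.
curl -sS --header "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/nodes/iruya-node:10250/proxy/logs/"
```

The `name:port` form in the URL is what the test means by an "explicit kubelet port": the apiserver proxies to that port on the node rather than the node's default.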
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:51:51.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 26 14:51:51.904: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8357,SelfLink:/api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed,UID:576dbf52-72ff-41ba-8a4a-65e33deb9ba7,ResourceVersion:21951374,Generation:0,CreationTimestamp:2020-01-26 14:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 26 14:51:51.904: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8357,SelfLink:/api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed,UID:576dbf52-72ff-41ba-8a4a-65e33deb9ba7,ResourceVersion:21951375,Generation:0,CreationTimestamp:2020-01-26 14:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 26 14:51:51.904: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8357,SelfLink:/api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed,UID:576dbf52-72ff-41ba-8a4a-65e33deb9ba7,ResourceVersion:21951376,Generation:0,CreationTimestamp:2020-01-26 14:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 26 14:52:01.972: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8357,SelfLink:/api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed,UID:576dbf52-72ff-41ba-8a4a-65e33deb9ba7,ResourceVersion:21951391,Generation:0,CreationTimestamp:2020-01-26 14:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 26 14:52:01.973: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8357,SelfLink:/api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed,UID:576dbf52-72ff-41ba-8a4a-65e33deb9ba7,ResourceVersion:21951392,Generation:0,CreationTimestamp:2020-01-26 14:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 26 14:52:01.973: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8357,SelfLink:/api/v1/namespaces/watch-8357/configmaps/e2e-watch-test-label-changed,UID:576dbf52-72ff-41ba-8a4a-65e33deb9ba7,ResourceVersion:21951393,Generation:0,CreationTimestamp:2020-01-26 14:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:52:01.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8357" for this suite.
Jan 26 14:52:08.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:52:08.201: INFO: namespace watch-8357 deletion completed in 6.220025327s

• [SLOW TEST:16.563 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
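The Watchers test relies on a property of label-selected watches that the ADDED/MODIFIED/DELETED events above demonstrate: when an object's labels stop matching the selector the watch reports a synthetic DELETED event, and when the label is restored it reports ADDED again. A minimal manual reproduction (names and label values mirror the log; the namespace is ephemeral and recreated by each run):

```shell
# Terminal 1: watch only configmaps carrying the test's label.
kubectl get configmaps -n watch-8357 \
  -l watch-this-configmap=label-changed-and-restored --watch

# Terminal 2: flip the label away and back; the watcher sees DELETED,
# then ADDED, even though the object itself was never deleted.
kubectl label configmap e2e-watch-test-label-changed -n watch-8357 \
  --overwrite watch-this-configmap=something-else
kubectl label configmap e2e-watch-test-label-changed -n watch-8357 \
  --overwrite watch-this-configmap=label-changed-and-restored
```

This also explains the "Expecting not to observe a notification" step: modifications made while the label does not match are invisible to the selector-scoped watch.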
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:52:08.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-12bc383b-4dc3-4332-9f73-0d8622f3a522
STEP: Creating a pod to test consume configMaps
Jan 26 14:52:08.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155" in namespace "projected-8005" to be "success or failure"
Jan 26 14:52:08.391: INFO: Pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155": Phase="Pending", Reason="", readiness=false. Elapsed: 10.811829ms
Jan 26 14:52:10.404: INFO: Pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023306453s
Jan 26 14:52:12.512: INFO: Pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131186454s
Jan 26 14:52:14.525: INFO: Pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144574306s
Jan 26 14:52:16.564: INFO: Pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.183302354s
STEP: Saw pod success
Jan 26 14:52:16.564: INFO: Pod "pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155" satisfied condition "success or failure"
Jan 26 14:52:16.571: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 26 14:52:16.748: INFO: Waiting for pod pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155 to disappear
Jan 26 14:52:16.757: INFO: Pod pod-projected-configmaps-93c5297f-56b9-4743-a9ef-303658673155 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:52:16.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8005" for this suite.
Jan 26 14:52:22.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:52:22.972: INFO: namespace projected-8005 deletion completed in 6.193330505s

• [SLOW TEST:14.770 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
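The projected-configMap test mounts a ConfigMap through a `projected` volume with `defaultMode` set and checks the file permissions from inside the pod. A minimal sketch of such a pod, with illustrative names (the test's generated names are random):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400      # files appear as r-------- inside the container
      sources:
      - configMap:
          name: demo-config
EOF
```

`defaultMode` is why the test carries the `[LinuxOnly]` tag: the mode bits only have meaning on a Linux filesystem.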
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:52:22.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4f542b21-ab98-406e-995d-daa20d3813f8
STEP: Creating a pod to test consume configMaps
Jan 26 14:52:23.090: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a" in namespace "configmap-4943" to be "success or failure"
Jan 26 14:52:23.104: INFO: Pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.429902ms
Jan 26 14:52:25.112: INFO: Pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022069169s
Jan 26 14:52:27.129: INFO: Pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038617844s
Jan 26 14:52:29.181: INFO: Pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091380789s
Jan 26 14:52:31.195: INFO: Pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104645513s
STEP: Saw pod success
Jan 26 14:52:31.195: INFO: Pod "pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a" satisfied condition "success or failure"
Jan 26 14:52:31.199: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a container configmap-volume-test: 
STEP: delete the pod
Jan 26 14:52:31.270: INFO: Waiting for pod pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a to disappear
Jan 26 14:52:31.280: INFO: Pod pod-configmaps-0a761560-5307-47e1-aa88-9e56658c367a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:52:31.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4943" for this suite.
Jan 26 14:52:37.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:52:37.493: INFO: namespace configmap-4943 deletion completed in 6.204452553s

• [SLOW TEST:14.520 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
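The plain ConfigMap volume test is the non-projected counterpart of the projected test earlier in this log: the ConfigMap is referenced directly as the volume source rather than as one of several `projected` sources. A minimal sketch with illustrative names:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config-plain
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-configmap-pod
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:               # direct configMap volume source
      name: demo-config-plain
EOF
```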
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:52:37.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 26 14:52:46.388: INFO: Successfully updated pod "annotationupdate990a4055-e734-4a27-9445-92870693340b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:52:48.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5451" for this suite.
Jan 26 14:53:12.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:53:12.742: INFO: namespace downward-api-5451 deletion completed in 24.240544833s

• [SLOW TEST:35.249 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
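The Downward API test above creates a pod whose annotations are projected into a file, updates an annotation, and waits for the kubelet to rewrite the file ("Successfully updated pod"). A sketch of that setup, with illustrative names and annotation values:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-annotation-pod
  annotations:
    build: "one"
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# Change the annotation; the kubelet refreshes /etc/podinfo/annotations
# in place on its next sync, which is what the test waits for.
kubectl annotate pod demo-annotation-pod --overwrite build=two
```

Unlike downward-API environment variables, the volume form is updated live, which is the behavior under test here.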
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:53:12.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3411
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 26 14:53:12.878: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 26 14:53:47.227: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3411 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 14:53:47.227: INFO: >>> kubeConfig: /root/.kube/config
I0126 14:53:47.332663       8 log.go:172] (0xc001009a20) (0xc00237f360) Create stream
I0126 14:53:47.332828       8 log.go:172] (0xc001009a20) (0xc00237f360) Stream added, broadcasting: 1
I0126 14:53:47.351456       8 log.go:172] (0xc001009a20) Reply frame received for 1
I0126 14:53:47.351589       8 log.go:172] (0xc001009a20) (0xc001735680) Create stream
I0126 14:53:47.351620       8 log.go:172] (0xc001009a20) (0xc001735680) Stream added, broadcasting: 3
I0126 14:53:47.355171       8 log.go:172] (0xc001009a20) Reply frame received for 3
I0126 14:53:47.355230       8 log.go:172] (0xc001009a20) (0xc00221ad20) Create stream
I0126 14:53:47.355253       8 log.go:172] (0xc001009a20) (0xc00221ad20) Stream added, broadcasting: 5
I0126 14:53:47.358955       8 log.go:172] (0xc001009a20) Reply frame received for 5
I0126 14:53:48.660213       8 log.go:172] (0xc001009a20) Data frame received for 3
I0126 14:53:48.660371       8 log.go:172] (0xc001735680) (3) Data frame handling
I0126 14:53:48.660415       8 log.go:172] (0xc001735680) (3) Data frame sent
I0126 14:53:48.811329       8 log.go:172] (0xc001009a20) (0xc00221ad20) Stream removed, broadcasting: 5
I0126 14:53:48.811485       8 log.go:172] (0xc001009a20) Data frame received for 1
I0126 14:53:48.811614       8 log.go:172] (0xc001009a20) (0xc001735680) Stream removed, broadcasting: 3
I0126 14:53:48.811707       8 log.go:172] (0xc00237f360) (1) Data frame handling
I0126 14:53:48.811738       8 log.go:172] (0xc00237f360) (1) Data frame sent
I0126 14:53:48.811770       8 log.go:172] (0xc001009a20) (0xc00237f360) Stream removed, broadcasting: 1
I0126 14:53:48.811795       8 log.go:172] (0xc001009a20) Go away received
I0126 14:53:48.812159       8 log.go:172] (0xc001009a20) (0xc00237f360) Stream removed, broadcasting: 1
I0126 14:53:48.812199       8 log.go:172] (0xc001009a20) (0xc001735680) Stream removed, broadcasting: 3
I0126 14:53:48.812211       8 log.go:172] (0xc001009a20) (0xc00221ad20) Stream removed, broadcasting: 5
Jan 26 14:53:48.812: INFO: Found all expected endpoints: [netserver-0]
Jan 26 14:53:48.818: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3411 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 26 14:53:48.818: INFO: >>> kubeConfig: /root/.kube/config
I0126 14:53:48.902360       8 log.go:172] (0xc000653c30) (0xc0017359a0) Create stream
I0126 14:53:48.902610       8 log.go:172] (0xc000653c30) (0xc0017359a0) Stream added, broadcasting: 1
I0126 14:53:48.916913       8 log.go:172] (0xc000653c30) Reply frame received for 1
I0126 14:53:48.917014       8 log.go:172] (0xc000653c30) (0xc0028a52c0) Create stream
I0126 14:53:48.917026       8 log.go:172] (0xc000653c30) (0xc0028a52c0) Stream added, broadcasting: 3
I0126 14:53:48.919597       8 log.go:172] (0xc000653c30) Reply frame received for 3
I0126 14:53:48.919664       8 log.go:172] (0xc000653c30) (0xc00244d220) Create stream
I0126 14:53:48.919684       8 log.go:172] (0xc000653c30) (0xc00244d220) Stream added, broadcasting: 5
I0126 14:53:48.922082       8 log.go:172] (0xc000653c30) Reply frame received for 5
I0126 14:53:50.110826       8 log.go:172] (0xc000653c30) Data frame received for 3
I0126 14:53:50.110930       8 log.go:172] (0xc0028a52c0) (3) Data frame handling
I0126 14:53:50.110989       8 log.go:172] (0xc0028a52c0) (3) Data frame sent
I0126 14:53:50.344360       8 log.go:172] (0xc000653c30) (0xc0028a52c0) Stream removed, broadcasting: 3
I0126 14:53:50.344670       8 log.go:172] (0xc000653c30) Data frame received for 1
I0126 14:53:50.344832       8 log.go:172] (0xc0017359a0) (1) Data frame handling
I0126 14:53:50.344869       8 log.go:172] (0xc000653c30) (0xc00244d220) Stream removed, broadcasting: 5
I0126 14:53:50.344999       8 log.go:172] (0xc0017359a0) (1) Data frame sent
I0126 14:53:50.345045       8 log.go:172] (0xc000653c30) (0xc0017359a0) Stream removed, broadcasting: 1
I0126 14:53:50.345556       8 log.go:172] (0xc000653c30) (0xc0017359a0) Stream removed, broadcasting: 1
I0126 14:53:50.345585       8 log.go:172] (0xc000653c30) (0xc0028a52c0) Stream removed, broadcasting: 3
I0126 14:53:50.345607       8 log.go:172] (0xc000653c30) (0xc00244d220) Stream removed, broadcasting: 5
Jan 26 14:53:50.345: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 14:53:50.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0126 14:53:50.346786       8 log.go:172] (0xc000653c30) Go away received
STEP: Destroying namespace "pod-network-test-3411" for this suite.
Jan 26 14:54:14.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 14:54:14.586: INFO: namespace pod-network-test-3411 deletion completed in 24.219830674s

• [SLOW TEST:61.843 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
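The `ExecWithOptions` lines above show how the node-to-pod UDP check is actually performed: from a host-network helper pod, `nc` sends the string `hostName` as a UDP datagram to each netserver pod IP on port 8081 and reads back the responder's hostname. The equivalent manual probe (namespace, pod names, and the pod IP `10.44.0.1` are taken from the log; they are ephemeral per run):

```shell
# Probe a netserver pod's UDP endpoint from the hostexec container,
# discarding blank lines from nc's output as the test does.
kubectl exec -n pod-network-test-3411 host-test-container-pod -c hostexec -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*\$'"
```

A non-empty reply from every netserver IP is what the test summarizes as "Found all expected endpoints".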
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 14:54:14.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1341
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1341
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1341
Jan 26 14:54:14.742: INFO: Found 0 stateful pods, waiting for 1
Jan 26 14:54:24.866: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 26 14:54:24.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:54:25.472: INFO: stderr: "I0126 14:54:25.098605    3056 log.go:172] (0xc0009fe420) (0xc0004366e0) Create stream\nI0126 14:54:25.098782    3056 log.go:172] (0xc0009fe420) (0xc0004366e0) Stream added, broadcasting: 1\nI0126 14:54:25.119375    3056 log.go:172] (0xc0009fe420) Reply frame received for 1\nI0126 14:54:25.119527    3056 log.go:172] (0xc0009fe420) (0xc0005ba280) Create stream\nI0126 14:54:25.119553    3056 log.go:172] (0xc0009fe420) (0xc0005ba280) Stream added, broadcasting: 3\nI0126 14:54:25.121503    3056 log.go:172] (0xc0009fe420) Reply frame received for 3\nI0126 14:54:25.121548    3056 log.go:172] (0xc0009fe420) (0xc000436000) Create stream\nI0126 14:54:25.121559    3056 log.go:172] (0xc0009fe420) (0xc000436000) Stream added, broadcasting: 5\nI0126 14:54:25.124874    3056 log.go:172] (0xc0009fe420) Reply frame received for 5\nI0126 14:54:25.261419    3056 log.go:172] (0xc0009fe420) Data frame received for 5\nI0126 14:54:25.261504    3056 log.go:172] (0xc000436000) (5) Data frame handling\nI0126 14:54:25.261533    3056 log.go:172] (0xc000436000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:54:25.315194    3056 log.go:172] (0xc0009fe420) Data frame received for 3\nI0126 14:54:25.315253    3056 log.go:172] (0xc0005ba280) (3) Data frame handling\nI0126 14:54:25.315281    3056 log.go:172] (0xc0005ba280) (3) Data frame sent\nI0126 14:54:25.454692    3056 log.go:172] (0xc0009fe420) Data frame received for 1\nI0126 14:54:25.454887    3056 log.go:172] (0xc0009fe420) (0xc000436000) Stream removed, broadcasting: 5\nI0126 14:54:25.455038    3056 log.go:172] (0xc0004366e0) (1) Data frame handling\nI0126 14:54:25.455089    3056 log.go:172] (0xc0004366e0) (1) Data frame sent\nI0126 14:54:25.455335    3056 log.go:172] (0xc0009fe420) (0xc0005ba280) Stream removed, broadcasting: 3\nI0126 14:54:25.455404    3056 log.go:172] (0xc0009fe420) (0xc0004366e0) Stream removed, broadcasting: 1\nI0126 14:54:25.455450    3056 log.go:172] 
(0xc0009fe420) Go away received\nI0126 14:54:25.458013    3056 log.go:172] (0xc0009fe420) (0xc0004366e0) Stream removed, broadcasting: 1\nI0126 14:54:25.458051    3056 log.go:172] (0xc0009fe420) (0xc0005ba280) Stream removed, broadcasting: 3\nI0126 14:54:25.458076    3056 log.go:172] (0xc0009fe420) (0xc000436000) Stream removed, broadcasting: 5\n"
Jan 26 14:54:25.473: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:54:25.473: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

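The `mv` command the test just ran is its mechanism for making `ss-0` unready: moving nginx's `index.html` out of the web root makes the HTTP readiness probe fail, so the pod transitions to `Running - Ready=false` without being restarted. The pair of commands, as recorded in this log (later steps restore the file the same way):

```shell
# Break the readiness probe on ss-0 by hiding the file nginx serves.
kubectl exec -n statefulset-1341 ss-0 -- \
  /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# Restore readiness once the burst-scaling behavior has been observed.
kubectl exec -n statefulset-1341 ss-0 -- \
  /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

With burst (non-ordered) pod management, the subsequent scale-up proceeds even while `ss-0` is unready, which is the property this conformance test asserts.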
Jan 26 14:54:25.482: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 26 14:54:35.493: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:54:35.494: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:54:35.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999442s
Jan 26 14:54:36.766: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.959511531s
Jan 26 14:54:37.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.750936391s
Jan 26 14:54:38.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.552645526s
Jan 26 14:54:39.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.538991417s
Jan 26 14:54:41.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.524251102s
Jan 26 14:54:42.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.460764547s
Jan 26 14:54:43.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.181953004s
Jan 26 14:54:44.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.136986056s
Jan 26 14:54:45.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 128.343429ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1341
Jan 26 14:54:46.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:54:47.013: INFO: stderr: "I0126 14:54:46.677712    3078 log.go:172] (0xc000966160) (0xc000900640) Create stream\nI0126 14:54:46.678428    3078 log.go:172] (0xc000966160) (0xc000900640) Stream added, broadcasting: 1\nI0126 14:54:46.702782    3078 log.go:172] (0xc000966160) Reply frame received for 1\nI0126 14:54:46.702913    3078 log.go:172] (0xc000966160) (0xc000930000) Create stream\nI0126 14:54:46.702932    3078 log.go:172] (0xc000966160) (0xc000930000) Stream added, broadcasting: 3\nI0126 14:54:46.704790    3078 log.go:172] (0xc000966160) Reply frame received for 3\nI0126 14:54:46.704819    3078 log.go:172] (0xc000966160) (0xc000694280) Create stream\nI0126 14:54:46.704827    3078 log.go:172] (0xc000966160) (0xc000694280) Stream added, broadcasting: 5\nI0126 14:54:46.707693    3078 log.go:172] (0xc000966160) Reply frame received for 5\nI0126 14:54:46.869309    3078 log.go:172] (0xc000966160) Data frame received for 3\nI0126 14:54:46.869398    3078 log.go:172] (0xc000930000) (3) Data frame handling\nI0126 14:54:46.869413    3078 log.go:172] (0xc000930000) (3) Data frame sent\nI0126 14:54:46.869954    3078 log.go:172] (0xc000966160) Data frame received for 5\nI0126 14:54:46.869974    3078 log.go:172] (0xc000694280) (5) Data frame handling\nI0126 14:54:46.869995    3078 log.go:172] (0xc000694280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 14:54:47.003126    3078 log.go:172] (0xc000966160) (0xc000930000) Stream removed, broadcasting: 3\nI0126 14:54:47.003267    3078 log.go:172] (0xc000966160) Data frame received for 1\nI0126 14:54:47.003292    3078 log.go:172] (0xc000900640) (1) Data frame handling\nI0126 14:54:47.003312    3078 log.go:172] (0xc000900640) (1) Data frame sent\nI0126 14:54:47.003411    3078 log.go:172] (0xc000966160) (0xc000900640) Stream removed, broadcasting: 1\nI0126 14:54:47.003497    3078 log.go:172] (0xc000966160) (0xc000694280) Stream removed, broadcasting: 5\nI0126 14:54:47.003527    3078 log.go:172] (0xc000966160) Go away received\nI0126 14:54:47.004252    3078 log.go:172] (0xc000966160) (0xc000900640) Stream removed, broadcasting: 1\nI0126 14:54:47.004284    3078 log.go:172] (0xc000966160) (0xc000930000) Stream removed, broadcasting: 3\nI0126 14:54:47.004299    3078 log.go:172] (0xc000966160) (0xc000694280) Stream removed, broadcasting: 5\n"
Jan 26 14:54:47.013: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 14:54:47.013: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 14:54:47.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:54:47.611: INFO: stderr: "I0126 14:54:47.247191    3098 log.go:172] (0xc0009b00b0) (0xc00099e5a0) Create stream\nI0126 14:54:47.247338    3098 log.go:172] (0xc0009b00b0) (0xc00099e5a0) Stream added, broadcasting: 1\nI0126 14:54:47.250412    3098 log.go:172] (0xc0009b00b0) Reply frame received for 1\nI0126 14:54:47.250441    3098 log.go:172] (0xc0009b00b0) (0xc00082c000) Create stream\nI0126 14:54:47.250447    3098 log.go:172] (0xc0009b00b0) (0xc00082c000) Stream added, broadcasting: 3\nI0126 14:54:47.251335    3098 log.go:172] (0xc0009b00b0) Reply frame received for 3\nI0126 14:54:47.251356    3098 log.go:172] (0xc0009b00b0) (0xc00099e6e0) Create stream\nI0126 14:54:47.251365    3098 log.go:172] (0xc0009b00b0) (0xc00099e6e0) Stream added, broadcasting: 5\nI0126 14:54:47.252672    3098 log.go:172] (0xc0009b00b0) Reply frame received for 5\nI0126 14:54:47.519903    3098 log.go:172] (0xc0009b00b0) Data frame received for 5\nI0126 14:54:47.519951    3098 log.go:172] (0xc00099e6e0) (5) Data frame handling\nI0126 14:54:47.519972    3098 log.go:172] (0xc00099e6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0126 14:54:47.529937    3098 log.go:172] (0xc0009b00b0) Data frame received for 3\nI0126 14:54:47.529983    3098 log.go:172] (0xc00082c000) (3) Data frame handling\nI0126 14:54:47.529994    3098 log.go:172] (0xc00082c000) (3) Data frame sent\nI0126 14:54:47.530015    3098 log.go:172] (0xc0009b00b0) Data frame received for 5\nI0126 14:54:47.530023    3098 log.go:172] (0xc00099e6e0) (5) Data frame handling\nI0126 14:54:47.530030    3098 log.go:172] (0xc00099e6e0) (5) Data frame sent\nI0126 14:54:47.530037    3098 log.go:172] (0xc0009b00b0) Data frame received for 5\nI0126 14:54:47.530043    3098 log.go:172] (0xc00099e6e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0126 14:54:47.530061    3098 log.go:172] (0xc00099e6e0) (5) Data frame sent\nI0126 14:54:47.598079    3098 log.go:172] (0xc0009b00b0) Data frame received for 1\nI0126 14:54:47.598149    3098 log.go:172] (0xc00099e5a0) (1) Data frame handling\nI0126 14:54:47.598179    3098 log.go:172] (0xc00099e5a0) (1) Data frame sent\nI0126 14:54:47.598856    3098 log.go:172] (0xc0009b00b0) (0xc00099e5a0) Stream removed, broadcasting: 1\nI0126 14:54:47.600369    3098 log.go:172] (0xc0009b00b0) (0xc00082c000) Stream removed, broadcasting: 3\nI0126 14:54:47.600605    3098 log.go:172] (0xc0009b00b0) (0xc00099e6e0) Stream removed, broadcasting: 5\nI0126 14:54:47.600638    3098 log.go:172] (0xc0009b00b0) Go away received\nI0126 14:54:47.600727    3098 log.go:172] (0xc0009b00b0) (0xc00099e5a0) Stream removed, broadcasting: 1\nI0126 14:54:47.600745    3098 log.go:172] (0xc0009b00b0) (0xc00082c000) Stream removed, broadcasting: 3\nI0126 14:54:47.600758    3098 log.go:172] (0xc0009b00b0) (0xc00099e6e0) Stream removed, broadcasting: 5\n"
Jan 26 14:54:47.611: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 14:54:47.611: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 14:54:47.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:54:48.115: INFO: stderr: "I0126 14:54:47.840581    3119 log.go:172] (0xc000a58370) (0xc00088e640) Create stream\nI0126 14:54:47.840890    3119 log.go:172] (0xc000a58370) (0xc00088e640) Stream added, broadcasting: 1\nI0126 14:54:47.856164    3119 log.go:172] (0xc000a58370) Reply frame received for 1\nI0126 14:54:47.856255    3119 log.go:172] (0xc000a58370) (0xc00093e000) Create stream\nI0126 14:54:47.856267    3119 log.go:172] (0xc000a58370) (0xc00093e000) Stream added, broadcasting: 3\nI0126 14:54:47.858071    3119 log.go:172] (0xc000a58370) Reply frame received for 3\nI0126 14:54:47.858099    3119 log.go:172] (0xc000a58370) (0xc00040c1e0) Create stream\nI0126 14:54:47.858111    3119 log.go:172] (0xc000a58370) (0xc00040c1e0) Stream added, broadcasting: 5\nI0126 14:54:47.859999    3119 log.go:172] (0xc000a58370) Reply frame received for 5\nI0126 14:54:47.961320    3119 log.go:172] (0xc000a58370) Data frame received for 3\nI0126 14:54:47.961397    3119 log.go:172] (0xc00093e000) (3) Data frame handling\nI0126 14:54:47.961424    3119 log.go:172] (0xc00093e000) (3) Data frame sent\nI0126 14:54:47.961474    3119 log.go:172] (0xc000a58370) Data frame received for 5\nI0126 14:54:47.961486    3119 log.go:172] (0xc00040c1e0) (5) Data frame handling\nI0126 14:54:47.961506    3119 log.go:172] (0xc00040c1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0126 14:54:48.102318    3119 log.go:172] (0xc000a58370) Data frame received for 1\nI0126 14:54:48.102450    3119 log.go:172] (0xc000a58370) (0xc00093e000) Stream removed, broadcasting: 3\nI0126 14:54:48.102590    3119 log.go:172] (0xc00088e640) (1) Data frame handling\nI0126 14:54:48.102628    3119 log.go:172] (0xc00088e640) (1) Data frame sent\nI0126 14:54:48.102636    3119 log.go:172] (0xc000a58370) (0xc00040c1e0) Stream removed, broadcasting: 5\nI0126 14:54:48.102723    3119 log.go:172] (0xc000a58370) (0xc00088e640) Stream removed, broadcasting: 1\nI0126 14:54:48.102752    3119 log.go:172] (0xc000a58370) Go away received\nI0126 14:54:48.104006    3119 log.go:172] (0xc000a58370) (0xc00088e640) Stream removed, broadcasting: 1\nI0126 14:54:48.104043    3119 log.go:172] (0xc000a58370) (0xc00093e000) Stream removed, broadcasting: 3\nI0126 14:54:48.104079    3119 log.go:172] (0xc000a58370) (0xc00040c1e0) Stream removed, broadcasting: 5\n"
Jan 26 14:54:48.116: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 26 14:54:48.116: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 26 14:54:48.304: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:54:48.304: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 26 14:54:48.304: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 26 14:54:48.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:54:48.805: INFO: stderr: "I0126 14:54:48.521468    3141 log.go:172] (0xc00096e0b0) (0xc00091a140) Create stream\nI0126 14:54:48.521704    3141 log.go:172] (0xc00096e0b0) (0xc00091a140) Stream added, broadcasting: 1\nI0126 14:54:48.536476    3141 log.go:172] (0xc00096e0b0) Reply frame received for 1\nI0126 14:54:48.536570    3141 log.go:172] (0xc00096e0b0) (0xc0008fe000) Create stream\nI0126 14:54:48.536583    3141 log.go:172] (0xc00096e0b0) (0xc0008fe000) Stream added, broadcasting: 3\nI0126 14:54:48.538710    3141 log.go:172] (0xc00096e0b0) Reply frame received for 3\nI0126 14:54:48.538745    3141 log.go:172] (0xc00096e0b0) (0xc00091a1e0) Create stream\nI0126 14:54:48.538754    3141 log.go:172] (0xc00096e0b0) (0xc00091a1e0) Stream added, broadcasting: 5\nI0126 14:54:48.541156    3141 log.go:172] (0xc00096e0b0) Reply frame received for 5\nI0126 14:54:48.662509    3141 log.go:172] (0xc00096e0b0) Data frame received for 5\nI0126 14:54:48.662652    3141 log.go:172] (0xc00091a1e0) (5) Data frame handling\nI0126 14:54:48.662699    3141 log.go:172] (0xc00091a1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:54:48.662764    3141 log.go:172] (0xc00096e0b0) Data frame received for 3\nI0126 14:54:48.662813    3141 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0126 14:54:48.662841    3141 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0126 14:54:48.793957    3141 log.go:172] (0xc00096e0b0) Data frame received for 1\nI0126 14:54:48.794304    3141 log.go:172] (0xc00091a140) (1) Data frame handling\nI0126 14:54:48.794363    3141 log.go:172] (0xc00096e0b0) (0xc0008fe000) Stream removed, broadcasting: 3\nI0126 14:54:48.794480    3141 log.go:172] (0xc00096e0b0) (0xc00091a1e0) Stream removed, broadcasting: 5\nI0126 14:54:48.794517    3141 log.go:172] (0xc00091a140) (1) Data frame sent\nI0126 14:54:48.794526    3141 log.go:172] (0xc00096e0b0) (0xc00091a140) Stream removed, broadcasting: 1\nI0126 14:54:48.794537    3141 log.go:172] (0xc00096e0b0) Go away received\nI0126 14:54:48.795923    3141 log.go:172] (0xc00096e0b0) (0xc00091a140) Stream removed, broadcasting: 1\nI0126 14:54:48.795966    3141 log.go:172] (0xc00096e0b0) (0xc0008fe000) Stream removed, broadcasting: 3\nI0126 14:54:48.796041    3141 log.go:172] (0xc00096e0b0) (0xc00091a1e0) Stream removed, broadcasting: 5\n"
Jan 26 14:54:48.806: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:54:48.806: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:54:48.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:54:49.316: INFO: stderr: "I0126 14:54:48.998519    3162 log.go:172] (0xc0008c00b0) (0xc0008566e0) Create stream\nI0126 14:54:48.998734    3162 log.go:172] (0xc0008c00b0) (0xc0008566e0) Stream added, broadcasting: 1\nI0126 14:54:49.003512    3162 log.go:172] (0xc0008c00b0) Reply frame received for 1\nI0126 14:54:49.003561    3162 log.go:172] (0xc0008c00b0) (0xc000658280) Create stream\nI0126 14:54:49.003571    3162 log.go:172] (0xc0008c00b0) (0xc000658280) Stream added, broadcasting: 3\nI0126 14:54:49.004648    3162 log.go:172] (0xc0008c00b0) Reply frame received for 3\nI0126 14:54:49.004685    3162 log.go:172] (0xc0008c00b0) (0xc0002c2000) Create stream\nI0126 14:54:49.004696    3162 log.go:172] (0xc0008c00b0) (0xc0002c2000) Stream added, broadcasting: 5\nI0126 14:54:49.005745    3162 log.go:172] (0xc0008c00b0) Reply frame received for 5\nI0126 14:54:49.101696    3162 log.go:172] (0xc0008c00b0) Data frame received for 5\nI0126 14:54:49.101746    3162 log.go:172] (0xc0002c2000) (5) Data frame handling\nI0126 14:54:49.101766    3162 log.go:172] (0xc0002c2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:54:49.203136    3162 log.go:172] (0xc0008c00b0) Data frame received for 3\nI0126 14:54:49.203186    3162 log.go:172] (0xc000658280) (3) Data frame handling\nI0126 14:54:49.203212    3162 log.go:172] (0xc000658280) (3) Data frame sent\nI0126 14:54:49.307581    3162 log.go:172] (0xc0008c00b0) (0xc0002c2000) Stream removed, broadcasting: 5\nI0126 14:54:49.307767    3162 log.go:172] (0xc0008c00b0) Data frame received for 1\nI0126 14:54:49.307805    3162 log.go:172] (0xc0008c00b0) (0xc000658280) Stream removed, broadcasting: 3\nI0126 14:54:49.307875    3162 log.go:172] (0xc0008566e0) (1) Data frame handling\nI0126 14:54:49.307898    3162 log.go:172] (0xc0008566e0) (1) Data frame sent\nI0126 14:54:49.307913    3162 log.go:172] (0xc0008c00b0) (0xc0008566e0) Stream removed, broadcasting: 1\nI0126 14:54:49.307953    3162 log.go:172] (0xc0008c00b0) Go away received\nI0126 14:54:49.308810    3162 log.go:172] (0xc0008c00b0) (0xc0008566e0) Stream removed, broadcasting: 1\nI0126 14:54:49.308832    3162 log.go:172] (0xc0008c00b0) (0xc000658280) Stream removed, broadcasting: 3\nI0126 14:54:49.308840    3162 log.go:172] (0xc0008c00b0) (0xc0002c2000) Stream removed, broadcasting: 5\n"
Jan 26 14:54:49.316: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:54:49.316: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:54:49.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 26 14:54:49.918: INFO: stderr: "I0126 14:54:49.587032    3180 log.go:172] (0xc0009b22c0) (0xc0009a4640) Create stream\nI0126 14:54:49.587182    3180 log.go:172] (0xc0009b22c0) (0xc0009a4640) Stream added, broadcasting: 1\nI0126 14:54:49.600272    3180 log.go:172] (0xc0009b22c0) Reply frame received for 1\nI0126 14:54:49.600454    3180 log.go:172] (0xc0009b22c0) (0xc00086c000) Create stream\nI0126 14:54:49.600464    3180 log.go:172] (0xc0009b22c0) (0xc00086c000) Stream added, broadcasting: 3\nI0126 14:54:49.603139    3180 log.go:172] (0xc0009b22c0) Reply frame received for 3\nI0126 14:54:49.603179    3180 log.go:172] (0xc0009b22c0) (0xc000636320) Create stream\nI0126 14:54:49.603223    3180 log.go:172] (0xc0009b22c0) (0xc000636320) Stream added, broadcasting: 5\nI0126 14:54:49.607224    3180 log.go:172] (0xc0009b22c0) Reply frame received for 5\nI0126 14:54:49.716183    3180 log.go:172] (0xc0009b22c0) Data frame received for 5\nI0126 14:54:49.716280    3180 log.go:172] (0xc000636320) (5) Data frame handling\nI0126 14:54:49.716324    3180 log.go:172] (0xc000636320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0126 14:54:49.781627    3180 log.go:172] (0xc0009b22c0) Data frame received for 3\nI0126 14:54:49.781723    3180 log.go:172] (0xc00086c000) (3) Data frame handling\nI0126 14:54:49.781773    3180 log.go:172] (0xc00086c000) (3) Data frame sent\nI0126 14:54:49.905999    3180 log.go:172] (0xc0009b22c0) Data frame received for 1\nI0126 14:54:49.906161    3180 log.go:172] (0xc0009b22c0) (0xc00086c000) Stream removed, broadcasting: 3\nI0126 14:54:49.906301    3180 log.go:172] (0xc0009a4640) (1) Data frame handling\nI0126 14:54:49.906330    3180 log.go:172] (0xc0009a4640) (1) Data frame sent\nI0126 14:54:49.906337    3180 log.go:172] (0xc0009b22c0) (0xc0009a4640) Stream removed, broadcasting: 1\nI0126 14:54:49.906993    3180 log.go:172] (0xc0009b22c0) (0xc000636320) Stream removed, broadcasting: 5\nI0126 14:54:49.907060    3180 log.go:172] (0xc0009b22c0) Go away received\nI0126 14:54:49.907605    3180 log.go:172] (0xc0009b22c0) (0xc0009a4640) Stream removed, broadcasting: 1\nI0126 14:54:49.907617    3180 log.go:172] (0xc0009b22c0) (0xc00086c000) Stream removed, broadcasting: 3\nI0126 14:54:49.907623    3180 log.go:172] (0xc0009b22c0) (0xc000636320) Stream removed, broadcasting: 5\n"
Jan 26 14:54:49.918: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 26 14:54:49.918: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 26 14:54:49.918: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 14:54:49.924: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 26 14:54:59.942: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:54:59.942: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:54:59.942: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 26 14:54:59.977: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:54:59.977: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:54:59.977: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:54:59.977: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:54:59.977: INFO: 
Jan 26 14:54:59.977: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:02.148: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:02.149: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:02.149: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:02.149: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:02.149: INFO: 
Jan 26 14:55:02.149: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:03.157: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:03.157: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:03.157: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:03.157: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:03.158: INFO: 
Jan 26 14:55:03.158: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:04.173: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:04.173: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:04.173: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:04.173: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:04.173: INFO: 
Jan 26 14:55:04.173: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:05.188: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:05.188: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:05.188: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:05.188: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:05.188: INFO: 
Jan 26 14:55:05.188: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:06.201: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:06.201: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:06.201: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:06.201: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:06.201: INFO: 
Jan 26 14:55:06.201: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:07.210: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:07.210: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:07.210: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:07.210: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:07.210: INFO: 
Jan 26 14:55:07.210: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:08.227: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:08.227: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:08.227: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:08.227: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:08.227: INFO: 
Jan 26 14:55:08.227: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 26 14:55:09.245: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 26 14:55:09.245: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:14 +0000 UTC  }]
Jan 26 14:55:09.245: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:09.245: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-26 14:54:35 +0000 UTC  }]
Jan 26 14:55:09.245: INFO: 
Jan 26 14:55:09.245: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-1341
Jan 26 14:55:10.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:55:10.518: INFO: rc: 1
Jan 26 14:55:10.519: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0031a7ad0 exit status 1   true [0xc00198e1d8 0xc00198e1f0 0xc00198e208] [0xc00198e1d8 0xc00198e1f0 0xc00198e208] [0xc00198e1e8 0xc00198e200] [0xba6c50 0xba6c50] 0xc002429c20 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Jan 26 14:55:20.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:55:20.710: INFO: rc: 1
Jan 26 14:55:20.711: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0020f7020 exit status 1   true [0xc002aca260 0xc002aca2a8 0xc002aca2f0] [0xc002aca260 0xc002aca2a8 0xc002aca2f0] [0xc002aca290 0xc002aca2e0] [0xba6c50 0xba6c50] 0xc001397740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:55:30.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:55:30.911: INFO: rc: 1
Jan 26 14:55:30.912: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003244090 exit status 1   true [0xc001c1c000 0xc001c1c018 0xc001c1c030] [0xc001c1c000 0xc001c1c018 0xc001c1c030] [0xc001c1c010 0xc001c1c028] [0xba6c50 0xba6c50] 0xc002828360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:55:40.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:55:41.102: INFO: rc: 1
Jan 26 14:55:41.103: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0031a7b90 exit status 1   true [0xc00198e210 0xc00198e228 0xc00198e240] [0xc00198e210 0xc00198e228 0xc00198e240] [0xc00198e220 0xc00198e238] [0xba6c50 0xba6c50] 0xc00209e120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:55:51.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:55:51.302: INFO: rc: 1
Jan 26 14:55:51.302: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fc3260 exit status 1   true [0xc000dff278 0xc000dff2b8 0xc000dff2f8] [0xc000dff278 0xc000dff2b8 0xc000dff2f8] [0xc000dff2b0 0xc000dff2d8] [0xba6c50 0xba6c50] 0xc0028244e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:56:01.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:56:01.612: INFO: rc: 1
Jan 26 14:56:01.612: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0020f70e0 exit status 1   true [0xc002aca308 0xc002aca338 0xc002aca380] [0xc002aca308 0xc002aca338 0xc002aca380] [0xc002aca318 0xc002aca360] [0xba6c50 0xba6c50] 0xc00232e960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:56:11.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:56:11.825: INFO: rc: 1
Jan 26 14:56:11.826: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0020f71a0 exit status 1   true [0xc002aca398 0xc002aca3b0 0xc002aca3c8] [0xc002aca398 0xc002aca3b0 0xc002aca3c8] [0xc002aca3a8 0xc002aca3c0] [0xba6c50 0xba6c50] 0xc00232f440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:56:21.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:56:22.013: INFO: rc: 1
Jan 26 14:56:22.013: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001fc3350 exit status 1   true [0xc000dff318 0xc000dff3f0 0xc000dff4d0] [0xc000dff318 0xc000dff3f0 0xc000dff4d0] [0xc000dff398 0xc000dff468] [0xba6c50 0xba6c50] 0xc0028248a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:56:32.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:56:32.260: INFO: rc: 1
Jan 26 14:56:32.260: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0032441e0 exit status 1   true [0xc001c1c038 0xc001c1c050 0xc001c1c068] [0xc001c1c038 0xc001c1c050 0xc001c1c068] [0xc001c1c048 0xc001c1c060] [0xba6c50 0xba6c50] 0xc002828780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:56:42.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:56:42.706: INFO: rc: 1
Jan 26 14:56:42.706: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0031a7c80 exit status 1   true [0xc00198e250 0xc00198e268 0xc00198e280] [0xc00198e250 0xc00198e268 0xc00198e280] [0xc00198e260 0xc00198e278] [0xba6c50 0xba6c50] 0xc00209e4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:56:52.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:56:52.841: INFO: rc: 1
Jan 26 14:56:52.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e090 exit status 1   true [0xc0001a7dc8 0xc000011f38 0xc000dfe118] [0xc0001a7dc8 0xc000011f38 0xc000dfe118] [0xc000011d68 0xc000dfe0e8] [0xba6c50 0xba6c50] 0xc002428480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:57:02.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:57:03.031: INFO: rc: 1
Jan 26 14:57:03.031: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e150 exit status 1   true [0xc000dfe150 0xc000dfe228 0xc000dfe480] [0xc000dfe150 0xc000dfe228 0xc000dfe480] [0xc000dfe190 0xc000dfe450] [0xba6c50 0xba6c50] 0xc002428de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:57:13.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:57:13.198: INFO: rc: 1
Jan 26 14:57:13.198: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf60c0 exit status 1   true [0xc00198e000 0xc00198e018 0xc00198e030] [0xc00198e000 0xc00198e018 0xc00198e030] [0xc00198e010 0xc00198e028] [0xba6c50 0xba6c50] 0xc00216d500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:57:23.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:57:23.408: INFO: rc: 1
Jan 26 14:57:23.409: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e240 exit status 1   true [0xc000dfe4c0 0xc000dfe550 0xc000dfe5f0] [0xc000dfe4c0 0xc000dfe550 0xc000dfe5f0] [0xc000dfe530 0xc000dfe5b8] [0xba6c50 0xba6c50] 0xc002429380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:57:33.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:57:33.658: INFO: rc: 1
Jan 26 14:57:33.659: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024d40c0 exit status 1   true [0xc001c1c000 0xc001c1c018 0xc001c1c030] [0xc001c1c000 0xc001c1c018 0xc001c1c030] [0xc001c1c010 0xc001c1c028] [0xba6c50 0xba6c50] 0xc002bde240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:57:43.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:57:43.836: INFO: rc: 1
Jan 26 14:57:43.836: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0024d41b0 exit status 1   true [0xc001c1c038 0xc001c1c050 0xc001c1c068] [0xc001c1c038 0xc001c1c050 0xc001c1c068] [0xc001c1c048 0xc001c1c060] [0xba6c50 0xba6c50] 0xc002bde540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:57:53.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:57:54.017: INFO: rc: 1
Jan 26 14:57:54.018: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf63f0 exit status 1   true [0xc00198e038 0xc00198e050 0xc00198e068] [0xc00198e038 0xc00198e050 0xc00198e068] [0xc00198e048 0xc00198e060] [0xba6c50 0xba6c50] 0xc001fa09c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:58:04.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:58:04.156: INFO: rc: 1
Jan 26 14:58:04.157: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e360 exit status 1   true [0xc000dfe678 0xc000dfe6e0 0xc000dfe7b0] [0xc000dfe678 0xc000dfe6e0 0xc000dfe7b0] [0xc000dfe6c8 0xc000dfe770] [0xba6c50 0xba6c50] 0xc002429c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:58:14.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:58:14.326: INFO: rc: 1
Jan 26 14:58:14.326: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf64e0 exit status 1   true [0xc00198e070 0xc00198e088 0xc00198e0a0] [0xc00198e070 0xc00198e088 0xc00198e0a0] [0xc00198e080 0xc00198e098] [0xba6c50 0xba6c50] 0xc002df2ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:58:24.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:58:24.489: INFO: rc: 1
Jan 26 14:58:24.490: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf6600 exit status 1   true [0xc00198e0a8 0xc00198e0c8 0xc00198e0e0] [0xc00198e0a8 0xc00198e0c8 0xc00198e0e0] [0xc00198e0c0 0xc00198e0d8] [0xba6c50 0xba6c50] 0xc0030e25a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:58:34.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:58:34.652: INFO: rc: 1
Jan 26 14:58:34.653: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e450 exit status 1   true [0xc000dfe7f8 0xc000dfe908 0xc000dfea58] [0xc000dfe7f8 0xc000dfe908 0xc000dfea58] [0xc000dfe870 0xc000dfe9e8] [0xba6c50 0xba6c50] 0xc001f1c6c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:58:44.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:58:44.839: INFO: rc: 1
Jan 26 14:58:44.839: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf6090 exit status 1   true [0xc000011d68 0xc00198e000 0xc00198e018] [0xc000011d68 0xc00198e000 0xc00198e018] [0xc0001a7dc8 0xc00198e010] [0xba6c50 0xba6c50] 0xc0030e21e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:58:54.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:58:54.997: INFO: rc: 1
Jan 26 14:58:54.997: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e0c0 exit status 1   true [0xc000dfe080 0xc000dfe150 0xc000dfe228] [0xc000dfe080 0xc000dfe150 0xc000dfe228] [0xc000dfe118 0xc000dfe190] [0xba6c50 0xba6c50] 0xc001fa0120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:59:04.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:59:05.170: INFO: rc: 1
Jan 26 14:59:05.170: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf63c0 exit status 1   true [0xc00198e020 0xc00198e038 0xc00198e050] [0xc00198e020 0xc00198e038 0xc00198e050] [0xc00198e030 0xc00198e048] [0xba6c50 0xba6c50] 0xc00216d380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:59:15.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:59:15.374: INFO: rc: 1
Jan 26 14:59:15.374: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e210 exit status 1   true [0xc000dfe298 0xc000dfe4c0 0xc000dfe550] [0xc000dfe298 0xc000dfe4c0 0xc000dfe550] [0xc000dfe480 0xc000dfe530] [0xba6c50 0xba6c50] 0xc0024282a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:59:25.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:59:25.603: INFO: rc: 1
Jan 26 14:59:25.603: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf6540 exit status 1   true [0xc00198e058 0xc00198e070 0xc00198e088] [0xc00198e058 0xc00198e070 0xc00198e088] [0xc00198e068 0xc00198e080] [0xba6c50 0xba6c50] 0xc001397740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:59:35.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:59:35.787: INFO: rc: 1
Jan 26 14:59:35.787: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf6690 exit status 1   true [0xc00198e090 0xc00198e0a8 0xc00198e0c8] [0xc00198e090 0xc00198e0a8 0xc00198e0c8] [0xc00198e0a0 0xc00198e0c0] [0xba6c50 0xba6c50] 0xc002079860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:59:45.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:59:46.007: INFO: rc: 1
Jan 26 14:59:46.008: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bf6750 exit status 1   true [0xc00198e0d0 0xc00198e0e8 0xc00198e100] [0xc00198e0d0 0xc00198e0e8 0xc00198e100] [0xc00198e0e0 0xc00198e0f8] [0xba6c50 0xba6c50] 0xc002bde120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 14:59:56.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 14:59:56.171: INFO: rc: 1
Jan 26 14:59:56.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002c0e390 exit status 1   true [0xc000dfe588 0xc000dfe678 0xc000dfe6e0] [0xc000dfe588 0xc000dfe678 0xc000dfe6e0] [0xc000dfe5f0 0xc000dfe6c8] [0xba6c50 0xba6c50] 0xc002428ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 15:00:06.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 15:00:06.367: INFO: rc: 1
Jan 26 15:00:06.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028a2090 exit status 1   true [0xc001c1c000 0xc001c1c018 0xc001c1c030] [0xc001c1c000 0xc001c1c018 0xc001c1c030] [0xc001c1c010 0xc001c1c028] [0xba6c50 0xba6c50] 0xc002234ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 26 15:00:16.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1341 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 26 15:00:16.583: INFO: rc: 1
Jan 26 15:00:16.584: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 26 15:00:16.584: INFO: Scaling statefulset ss to 0
Jan 26 15:00:16.619: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 26 15:00:16.626: INFO: Deleting all statefulset in ns statefulset-1341
Jan 26 15:00:16.630: INFO: Scaling statefulset ss to 0
Jan 26 15:00:16.640: INFO: Waiting for statefulset status.replicas updated to 0
Jan 26 15:00:16.642: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:00:16.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1341" for this suite.
Jan 26 15:00:22.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:00:22.800: INFO: namespace statefulset-1341 deletion completed in 6.133416164s

• [SLOW TEST:368.214 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
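The RunHostCmd lines in the spec above retry the same `kubectl exec` on a fixed 10-second cadence until the command succeeds or the wait times out. A minimal, self-contained sketch of that retry pattern is below; the function and parameter names (`retry_host_cmd`, `interval`, `timeout`) are illustrative, not the e2e framework's actual helper.

```python
import time
from typing import Callable


def retry_host_cmd(run: Callable[[], int], interval: float = 10.0,
                   timeout: float = 300.0,
                   sleep: Callable[[float], None] = time.sleep,
                   clock: Callable[[], float] = time.monotonic) -> int:
    """Re-run `run` until it returns rc 0 or `timeout` elapses.

    Mirrors the fixed-interval retry cadence seen in the log: run once,
    and on a nonzero rc wait `interval` seconds before trying again.
    `sleep` and `clock` are injectable so the loop can be tested without
    real delays.
    """
    deadline = clock() + timeout
    rc = run()
    while rc != 0 and clock() < deadline:
        sleep(interval)  # "Waiting 10s to retry failed RunHostCmd"
        rc = run()
    return rc
```

With a fake clock, a command that fails twice and then succeeds returns rc 0 after two 10-second waits, matching the log's spacing between attempts.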
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:00:22.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 26 15:00:23.026: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:00:24.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2774" for this suite.
Jan 26 15:00:30.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:00:30.357: INFO: namespace custom-resource-definition-2774 deletion completed in 6.226820891s

• [SLOW TEST:7.556 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:00:30.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
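The "delete options say so" part of this spec refers to the deletion propagation policy. A hedged standalone sketch of the DeleteOptions body that makes the garbage collector orphan dependents rather than cascade-delete them (the JSON is illustrative, not captured from this run):

```shell
# DeleteOptions with propagationPolicy=Orphan: dependents (the rc's pods)
# are left behind when the owner is deleted.
opts='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
echo "$opts" | grep -o '"propagationPolicy":"Orphan"'
```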
STEP: Gathering metrics
W0126 15:01:10.837750       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 26 15:01:10.837: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:01:10.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1150" for this suite.
Jan 26 15:01:30.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:01:31.039: INFO: namespace gc-1150 deletion completed in 20.181480404s

• [SLOW TEST:60.680 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:01:31.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2271.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2271.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2271.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2271.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2271.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2271.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

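The `podARec` step in the probe loops above turns the pod's IP into its pod A-record name by replacing dots with dashes. The same awk expression, run standalone on a sample IP (the IP is a made-up example; `dns-2271` is the test namespace from this run):

```shell
# Derive the pod A-record name the probe queries from a pod IP.
pod_ip="10.44.0.5"   # illustrative value, not from this log
podARec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-2271.pod.cluster.local"}')
echo "$podARec"
```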
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 15:01:43.253: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727: the server could not find the requested resource (get pods dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727)
Jan 26 15:01:43.257: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727: the server could not find the requested resource (get pods dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727)
Jan 26 15:01:43.260: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2271.svc.cluster.local from pod dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727: the server could not find the requested resource (get pods dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727)
Jan 26 15:01:43.270: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727: the server could not find the requested resource (get pods dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727)
Jan 26 15:01:43.278: INFO: Unable to read jessie_udp@PodARecord from pod dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727: the server could not find the requested resource (get pods dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727)
Jan 26 15:01:43.282: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727: the server could not find the requested resource (get pods dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727)
Jan 26 15:01:43.282: INFO: Lookups using dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2271.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 26 15:01:48.393: INFO: DNS probes using dns-2271/dns-test-1e759eb7-86be-4949-b113-2fd7bffb2727 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:01:48.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2271" for this suite.
Jan 26 15:01:54.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:01:54.691: INFO: namespace dns-2271 deletion completed in 6.160869123s

• [SLOW TEST:23.653 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:01:54.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 26 15:01:55.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1668'
Jan 26 15:01:57.543: INFO: stderr: ""
Jan 26 15:01:57.543: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 26 15:01:58.559: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:01:58.560: INFO: Found 0 / 1
Jan 26 15:01:59.554: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:01:59.554: INFO: Found 0 / 1
Jan 26 15:02:00.555: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:00.556: INFO: Found 0 / 1
Jan 26 15:02:01.551: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:01.551: INFO: Found 0 / 1
Jan 26 15:02:02.555: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:02.556: INFO: Found 0 / 1
Jan 26 15:02:03.554: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:03.554: INFO: Found 0 / 1
Jan 26 15:02:04.568: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:04.568: INFO: Found 0 / 1
Jan 26 15:02:05.562: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:05.562: INFO: Found 1 / 1
Jan 26 15:02:05.562: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 26 15:02:05.569: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:05.569: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 15:02:05.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-t446j --namespace=kubectl-1668 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 26 15:02:05.745: INFO: stderr: ""
Jan 26 15:02:05.745: INFO: stdout: "pod/redis-master-t446j patched\n"
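The `-p` argument above is a strategic-merge patch. The same payload, checked locally for well-formed JSON before use (python3 is assumed available purely for validation; it plays no part in the test itself):

```shell
# The annotation patch sent by the spec, validated as JSON.
patch='{"metadata":{"annotations":{"x":"y"}}}'
echo "$patch" | python3 -m json.tool > /dev/null && echo valid
```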
STEP: checking annotations
Jan 26 15:02:05.751: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:02:05.751: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:02:05.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1668" for this suite.
Jan 26 15:02:27.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:02:27.993: INFO: namespace kubectl-1668 deletion completed in 22.236777477s

• [SLOW TEST:33.300 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:02:27.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 26 15:02:28.068: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix546852175/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:02:28.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3808" for this suite.
Jan 26 15:02:34.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:02:34.332: INFO: namespace kubectl-3808 deletion completed in 6.157678548s

• [SLOW TEST:6.338 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:02:34.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8971.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8971.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local; sleep 1; done

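The probe loops above write whatever `dig +short … CNAME` returns into a results file, and the framework declares success once that file holds the expected target. A local simulation of that success condition (paths and filenames are illustrative):

```shell
# Simulate the probe's pass condition: the answer file written by the dig
# loop must contain the expected CNAME target, here 'foo.example.com.'.
mkdir -p /tmp/dns-demo
printf 'foo.example.com.\n' > /tmp/dns-demo/wheezy_udp@dns-test-service-3
[ "$(cat /tmp/dns-demo/wheezy_udp@dns-test-service-3)" = 'foo.example.com.' ] && echo OK
```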
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 15:02:46.554: INFO: File wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-539723d8-170f-4629-b6ea-d154e16b9305 contains '' instead of 'foo.example.com.'
Jan 26 15:02:46.561: INFO: File jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-539723d8-170f-4629-b6ea-d154e16b9305 contains '' instead of 'foo.example.com.'
Jan 26 15:02:46.561: INFO: Lookups using dns-8971/dns-test-539723d8-170f-4629-b6ea-d154e16b9305 failed for: [wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local]

Jan 26 15:02:51.587: INFO: DNS probes using dns-test-539723d8-170f-4629-b6ea-d154e16b9305 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8971.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8971.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 15:03:05.849: INFO: File wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 contains '' instead of 'bar.example.com.'
Jan 26 15:03:05.856: INFO: File jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 contains '' instead of 'bar.example.com.'
Jan 26 15:03:05.856: INFO: Lookups using dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 failed for: [wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local]

Jan 26 15:03:10.875: INFO: File wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 26 15:03:10.883: INFO: File jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 26 15:03:10.883: INFO: Lookups using dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 failed for: [wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local]

Jan 26 15:03:15.872: INFO: File wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 26 15:03:15.878: INFO: File jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 26 15:03:15.878: INFO: Lookups using dns-8971/dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 failed for: [wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local]

Jan 26 15:03:20.899: INFO: DNS probes using dns-test-1a0bb47b-af41-401a-a89b-83dad90bea86 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8971.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8971.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 26 15:03:35.321: INFO: File wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 contains '' instead of '10.103.102.165'
Jan 26 15:03:35.326: INFO: File jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 contains '' instead of '10.103.102.165'
Jan 26 15:03:35.326: INFO: Lookups using dns-8971/dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 failed for: [wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local]

Jan 26 15:03:40.354: INFO: File wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 contains '' instead of '10.103.102.165'
Jan 26 15:03:40.372: INFO: File jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local from pod  dns-8971/dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 contains '' instead of '10.103.102.165'
Jan 26 15:03:40.372: INFO: Lookups using dns-8971/dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 failed for: [wheezy_udp@dns-test-service-3.dns-8971.svc.cluster.local jessie_udp@dns-test-service-3.dns-8971.svc.cluster.local]

Jan 26 15:03:45.344: INFO: DNS probes using dns-test-caff20f1-d7ba-4e7f-9e72-ee4903baa3c8 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:03:45.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8971" for this suite.
Jan 26 15:03:53.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:03:53.892: INFO: namespace dns-8971 deletion completed in 8.222997674s

• [SLOW TEST:79.558 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:03:53.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 26 15:03:54.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4" in namespace "projected-4947" to be "success or failure"
Jan 26 15:03:54.039: INFO: Pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.603838ms
Jan 26 15:03:56.052: INFO: Pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023331072s
Jan 26 15:03:58.059: INFO: Pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030005553s
Jan 26 15:04:00.065: INFO: Pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03656057s
Jan 26 15:04:02.074: INFO: Pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045352265s
STEP: Saw pod success
Jan 26 15:04:02.074: INFO: Pod "downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4" satisfied condition "success or failure"
Jan 26 15:04:02.077: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4 container client-container: 
STEP: delete the pod
Jan 26 15:04:02.162: INFO: Waiting for pod downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4 to disappear
Jan 26 15:04:02.179: INFO: Pod downwardapi-volume-c8da551c-ad25-452a-b5e1-2421a7d83ad4 no longer exists
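The downward API volume file that this spec reads holds the container's cpu request divided by the `resourceFieldRef` divisor, rounded up. A sketch of that arithmetic with illustrative values (the 250m request is an assumption, not taken from this run):

```shell
# Downward API divisor arithmetic for requests.cpu (ceiling division).
req_m=250      # cpu request of 250m, illustrative
div_m=1000     # divisor "1" = one whole core = 1000m
echo $(( (req_m + div_m - 1) / div_m ))
```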
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:04:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4947" for this suite.
Jan 26 15:04:08.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:04:08.417: INFO: namespace projected-4947 deletion completed in 6.230571568s

• [SLOW TEST:14.525 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:04:08.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 26 15:04:08.514: INFO: namespace kubectl-1573
Jan 26 15:04:08.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1573'
Jan 26 15:04:09.078: INFO: stderr: ""
Jan 26 15:04:09.079: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 26 15:04:10.087: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:10.088: INFO: Found 0 / 1
Jan 26 15:04:11.090: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:11.090: INFO: Found 0 / 1
Jan 26 15:04:12.109: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:12.109: INFO: Found 0 / 1
Jan 26 15:04:13.093: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:13.093: INFO: Found 0 / 1
Jan 26 15:04:14.096: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:14.097: INFO: Found 0 / 1
Jan 26 15:04:15.330: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:15.330: INFO: Found 0 / 1
Jan 26 15:04:16.090: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:16.090: INFO: Found 1 / 1
Jan 26 15:04:16.090: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 26 15:04:16.095: INFO: Selector matched 1 pods for map[app:redis]
Jan 26 15:04:16.095: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 26 15:04:16.095: INFO: wait on redis-master startup in kubectl-1573 
Jan 26 15:04:16.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zkjcs redis-master --namespace=kubectl-1573'
Jan 26 15:04:16.279: INFO: stderr: ""
Jan 26 15:04:16.279: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 26 Jan 15:04:14.817 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 26 Jan 15:04:14.817 # Server started, Redis version 3.2.12\n1:M 26 Jan 15:04:14.817 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 26 Jan 15:04:14.817 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 26 15:04:16.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1573'
Jan 26 15:04:16.474: INFO: stderr: ""
Jan 26 15:04:16.474: INFO: stdout: "service/rm2 exposed\n"
Jan 26 15:04:16.525: INFO: Service rm2 in namespace kubectl-1573 found.
STEP: exposing service
Jan 26 15:04:18.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1573'
Jan 26 15:04:18.834: INFO: stderr: ""
Jan 26 15:04:18.835: INFO: stdout: "service/rm3 exposed\n"
Jan 26 15:04:18.959: INFO: Service rm3 in namespace kubectl-1573 found.
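Both exposes above front the same Redis container port: `rm2` maps 1234→6379 and `rm3` maps 2345→6379. A standalone sketch of the two resulting port mappings (shapes are illustrative; ClusterIPs and selectors omitted):

```shell
# The port mappings created by the two expose commands.
rm2='{"name":"rm2","port":1234,"targetPort":6379}'
rm3='{"name":"rm3","port":2345,"targetPort":6379}'
for svc in "$rm2" "$rm3"; do
  echo "$svc" | grep -o '"targetPort":6379'
done
```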
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:04:21.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1573" for this suite.
Jan 26 15:04:45.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:04:45.190: INFO: namespace kubectl-1573 deletion completed in 24.172019412s

• [SLOW TEST:36.772 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:04:45.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6818878b-212c-4b2b-a5d7-6fcfbd5be898
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
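The binary data in a ConfigMap's `binaryData` field travels base64-encoded through the API. A minimal local round trip of that encoding, which this spec's volume contents depend on (the three bytes are arbitrary):

```shell
# Base64 round trip of a small binary blob, as used by binaryData.
printf '\001\002\003' > /tmp/cm-blob
b64=$(base64 < /tmp/cm-blob | tr -d '\n')
echo "$b64"
echo "$b64" | base64 -d | wc -c | tr -d ' '
```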
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:04:57.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2788" for this suite.
Jan 26 15:05:19.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:05:19.604: INFO: namespace configmap-2788 deletion completed in 22.11376423s

• [SLOW TEST:34.414 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
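The ConfigMap spec above verifies that a binary payload survives the round trip into a mounted volume. In the core/v1 API, binary content travels in a ConfigMap's `binaryData` field, base64-encoded; the kubelet decodes it when projecting the volume. A minimal sketch of that encoding round trip (the payload and names are placeholders, not the test's data):

```python
import base64

# Hypothetical binary payload standing in for the test's binary data.
payload = bytes([0xFF, 0x00, 0x7F, 0x10])

# ConfigMap manifests carry binary content in `binaryData`, base64-encoded.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-binary-example"},  # placeholder name
    "binaryData": {"dump.bin": base64.b64encode(payload).decode("ascii")},
}

# The kubelet decodes the value into the volume file, so decoding here
# must reproduce the original bytes exactly.
decoded = base64.b64decode(configmap["binaryData"]["dump.bin"])
assert decoded == payload
```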
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:05:19.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 26 15:05:19.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4821'
Jan 26 15:05:20.253: INFO: stderr: ""
Jan 26 15:05:20.253: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 26 15:05:20.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4821'
Jan 26 15:05:20.495: INFO: stderr: ""
Jan 26 15:05:20.495: INFO: stdout: "update-demo-nautilus-56swv update-demo-nautilus-dqr2l "
Jan 26 15:05:20.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56swv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4821'
Jan 26 15:05:20.691: INFO: stderr: ""
Jan 26 15:05:20.691: INFO: stdout: ""
Jan 26 15:05:20.691: INFO: update-demo-nautilus-56swv is created but not running
Jan 26 15:05:25.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4821'
Jan 26 15:05:27.169: INFO: stderr: ""
Jan 26 15:05:27.169: INFO: stdout: "update-demo-nautilus-56swv update-demo-nautilus-dqr2l "
Jan 26 15:05:27.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56swv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4821'
Jan 26 15:05:27.628: INFO: stderr: ""
Jan 26 15:05:27.628: INFO: stdout: ""
Jan 26 15:05:27.628: INFO: update-demo-nautilus-56swv is created but not running
Jan 26 15:05:32.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4821'
Jan 26 15:05:32.779: INFO: stderr: ""
Jan 26 15:05:32.779: INFO: stdout: "update-demo-nautilus-56swv update-demo-nautilus-dqr2l "
Jan 26 15:05:32.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56swv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4821'
Jan 26 15:05:32.907: INFO: stderr: ""
Jan 26 15:05:32.908: INFO: stdout: "true"
Jan 26 15:05:32.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56swv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4821'
Jan 26 15:05:33.048: INFO: stderr: ""
Jan 26 15:05:33.049: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 15:05:33.049: INFO: validating pod update-demo-nautilus-56swv
Jan 26 15:05:33.062: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 15:05:33.062: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 15:05:33.062: INFO: update-demo-nautilus-56swv is verified up and running
Jan 26 15:05:33.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dqr2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4821'
Jan 26 15:05:33.198: INFO: stderr: ""
Jan 26 15:05:33.198: INFO: stdout: "true"
Jan 26 15:05:33.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dqr2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4821'
Jan 26 15:05:33.296: INFO: stderr: ""
Jan 26 15:05:33.296: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 26 15:05:33.296: INFO: validating pod update-demo-nautilus-dqr2l
Jan 26 15:05:33.305: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 26 15:05:33.305: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 26 15:05:33.305: INFO: update-demo-nautilus-dqr2l is verified up and running
STEP: using delete to clean up resources
Jan 26 15:05:33.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4821'
Jan 26 15:05:33.458: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 26 15:05:33.458: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 26 15:05:33.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4821'
Jan 26 15:05:33.566: INFO: stderr: "No resources found.\n"
Jan 26 15:05:33.566: INFO: stdout: ""
Jan 26 15:05:33.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4821 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 26 15:05:33.785: INFO: stderr: ""
Jan 26 15:05:33.785: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:05:33.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4821" for this suite.
Jan 26 15:05:55.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:05:56.101: INFO: namespace kubectl-4821 deletion completed in 22.29019577s

• [SLOW TEST:36.497 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
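The Update Demo spec above repeatedly runs a kubectl go-template that prints "true" only when a container named `update-demo` reports a `running` state (the empty-stdout polls are pods still Pending). The same predicate, expressed as a small Python function over a pod dict (an illustrative stand-in, not the framework's code):

```python
def container_running(pod: dict, name: str) -> bool:
    """Mirror of the test's go-template check: true only if a container
    with the given name reports a `running` entry in containerStatuses."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return any(
        s.get("name") == name and "running" in s.get("state", {})
        for s in statuses
    )

# A pod still Pending has no running state yet -> the template prints "".
pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}},
]}}

# Once the container starts, the template prints "true".
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-01-26T15:05:30Z"}}},
]}}

assert not container_running(pending, "update-demo")
assert container_running(running, "update-demo")
```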
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:05:56.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-v6fm
STEP: Creating a pod to test atomic-volume-subpath
Jan 26 15:05:56.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v6fm" in namespace "subpath-2839" to be "success or failure"
Jan 26 15:05:56.286: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.937134ms
Jan 26 15:05:58.295: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019229559s
Jan 26 15:06:00.305: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028473077s
Jan 26 15:06:02.329: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052426258s
Jan 26 15:06:04.338: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 8.061608608s
Jan 26 15:06:06.350: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 10.074216101s
Jan 26 15:06:08.361: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 12.085265986s
Jan 26 15:06:10.370: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 14.093544668s
Jan 26 15:06:12.378: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 16.101568325s
Jan 26 15:06:14.385: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 18.109158424s
Jan 26 15:06:16.403: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 20.12637354s
Jan 26 15:06:18.411: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 22.134552701s
Jan 26 15:06:20.420: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 24.143752034s
Jan 26 15:06:22.430: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 26.153335032s
Jan 26 15:06:24.439: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Running", Reason="", readiness=true. Elapsed: 28.162978356s
Jan 26 15:06:26.450: INFO: Pod "pod-subpath-test-downwardapi-v6fm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.173679615s
STEP: Saw pod success
Jan 26 15:06:26.450: INFO: Pod "pod-subpath-test-downwardapi-v6fm" satisfied condition "success or failure"
Jan 26 15:06:26.456: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-v6fm container test-container-subpath-downwardapi-v6fm: 
STEP: delete the pod
Jan 26 15:06:26.518: INFO: Waiting for pod pod-subpath-test-downwardapi-v6fm to disappear
Jan 26 15:06:26.566: INFO: Pod pod-subpath-test-downwardapi-v6fm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-v6fm
Jan 26 15:06:26.567: INFO: Deleting pod "pod-subpath-test-downwardapi-v6fm" in namespace "subpath-2839"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:06:26.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2839" for this suite.
Jan 26 15:06:32.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:06:32.762: INFO: namespace subpath-2839 deletion completed in 6.182967551s

• [SLOW TEST:36.661 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
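The runs of `Waiting up to 5m0s ... Elapsed: ...` lines above come from a poll loop that rechecks the pod phase every couple of seconds until it reaches a terminal phase or the timeout expires. A minimal stand-in for that loop (a hypothetical helper, not the e2e framework's implementation):

```python
import time

def wait_for_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                   now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase ("Succeeded" or
    "Failed") or timeout_s elapses. `now` and `sleep` are injectable so the
    loop can be exercised without real waiting."""
    start = now()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() - start >= timeout_s:
            raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
        sleep(interval_s)

# Simulated pod that is Pending for a few polls, Running for a while,
# then Succeeded -- mirroring the sequence of phases logged above.
phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
result = wait_for_phase(lambda: next(phases), sleep=lambda _: None)
assert result == "Succeeded"
```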
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:06:32.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 26 15:06:49.033: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:06:49.044: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:06:51.044: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:06:51.051: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:06:53.044: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:06:53.054: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:06:55.045: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:06:55.055: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:06:57.045: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:06:57.070: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:06:59.044: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:06:59.057: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:07:01.045: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:07:01.055: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:07:03.045: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:07:03.054: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:07:05.044: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:07:05.051: INFO: Pod pod-with-poststart-http-hook still exists
Jan 26 15:07:07.044: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 26 15:07:07.078: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:07:07.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6310" for this suite.
Jan 26 15:07:29.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:07:29.283: INFO: namespace container-lifecycle-hook-6310 deletion completed in 22.197494117s

• [SLOW TEST:56.520 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
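The lifecycle-hook spec above wires a `postStart` `httpGet` hook: after the container starts, the kubelet issues an HTTP GET to the configured endpoint, and the helper pod created in BeforeEach records the request. The relevant container-spec fragment looks roughly like this (field names per the core/v1 API; the path, host, and port values are placeholders, not the test's actual values):

```python
# Sketch of a core/v1 container lifecycle stanza for a postStart HTTP hook.
lifecycle_stanza = {
    "lifecycle": {
        "postStart": {
            "httpGet": {
                "path": "/echo",       # placeholder path
                "host": "10.44.0.1",   # placeholder: the handler pod's IP
                "port": 8080,          # placeholder port
            }
        }
    }
}

# The kubelet calls this endpoint once, immediately after container start;
# if the hook fails, the container is killed per its restart policy.
hook = lifecycle_stanza["lifecycle"]["postStart"]["httpGet"]
assert {"path", "host", "port"} <= set(hook)
```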
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:07:29.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 26 15:07:29.437: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4003" to be "success or failure"
Jan 26 15:07:29.486: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 48.571835ms
Jan 26 15:07:31.494: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056916604s
Jan 26 15:07:33.502: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065080603s
Jan 26 15:07:35.518: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081458386s
Jan 26 15:07:37.533: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09604657s
Jan 26 15:07:39.567: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129586503s
Jan 26 15:07:41.615: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.17838004s
STEP: Saw pod success
Jan 26 15:07:41.615: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 26 15:07:41.620: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 26 15:07:41.720: INFO: Waiting for pod pod-host-path-test to disappear
Jan 26 15:07:41.805: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:07:41.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4003" for this suite.
Jan 26 15:07:47.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:07:48.019: INFO: namespace hostpath-4003 deletion completed in 6.20550046s

• [SLOW TEST:18.735 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
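The HostPath spec above asserts that the mounted volume carries the expected mode bits (the exact mode checked is not shown in this log). As a small illustration of how numeric mode bits map to the `drwxr-xr-x`-style string such a check compares against, using generic `stat` calls rather than the framework's code:

```python
import stat

# A directory with mode 0755 (a common default; the mode the test actually
# asserts is not visible in the log output above).
mode = stat.S_IFDIR | 0o755
assert stat.filemode(mode) == "drwxr-xr-x"

# A regular file with 0644, for comparison.
assert stat.filemode(stat.S_IFREG | 0o644) == "-rw-r--r--"
```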
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 26 15:07:48.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-6f8ee2a8-e696-4a40-a3b8-84af7973a2f2 in namespace container-probe-489
Jan 26 15:07:58.148: INFO: Started pod busybox-6f8ee2a8-e696-4a40-a3b8-84af7973a2f2 in namespace container-probe-489
STEP: checking the pod's current state and verifying that restartCount is present
Jan 26 15:07:58.152: INFO: Initial restart count of pod busybox-6f8ee2a8-e696-4a40-a3b8-84af7973a2f2 is 0
Jan 26 15:08:52.682: INFO: Restart count of pod container-probe-489/busybox-6f8ee2a8-e696-4a40-a3b8-84af7973a2f2 is now 1 (54.530322523s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 26 15:08:52.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-489" for this suite.
Jan 26 15:08:58.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 26 15:08:59.016: INFO: namespace container-probe-489 deletion completed in 6.279164611s

• [SLOW TEST:70.997 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
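The probe spec above runs `cat /tmp/health` as an exec liveness probe; once the file is removed, the probe fails, and after the configured number of consecutive failures the kubelet restarts the container (restartCount goes 0 to 1 about 54s in). A simplified model of that failure-threshold logic (not kubelet code):

```python
def probe_loop(results, failure_threshold=3):
    """Count restarts the way a kubelet-style liveness loop would: restart
    after `failure_threshold` consecutive probe failures, then reset the
    failure counter. A simplified model with illustrative defaults."""
    restarts = 0
    consecutive_failures = 0
    for ok in results:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts

# Probe succeeds while /tmp/health exists, then fails after it is removed;
# three consecutive failures trigger one restart, as seen in the log above.
results = [True] * 5 + [False] * 3 + [True] * 2
assert probe_loop(results) == 1
```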
SSSSS
Jan 26 15:08:59.017: INFO: Running AfterSuite actions on all nodes
Jan 26 15:08:59.017: INFO: Running AfterSuite actions on node 1
Jan 26 15:08:59.017: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7969.307 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS