I0130 12:56:23.420449 8 e2e.go:243] Starting e2e run "2ec793b6-f568-4dd1-b59d-699706adfadf" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580388982 - Will randomize all specs
Will run 215 of 4412 specs

Jan 30 12:56:23.870: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 12:56:23.879: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 30 12:56:23.988: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 30 12:56:24.056: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 30 12:56:24.056: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 30 12:56:24.056: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 30 12:56:24.100: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 30 12:56:24.100: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 30 12:56:24.100: INFO: e2e test version: v1.15.7
Jan 30 12:56:24.104: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:56:24.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Jan 30 12:56:24.185: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 12:56:24.219: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f" in namespace "downward-api-3283" to be "success or failure"
Jan 30 12:56:24.228: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.920254ms
Jan 30 12:56:26.238: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01853146s
Jan 30 12:56:28.254: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034268612s
Jan 30 12:56:30.261: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041962196s
Jan 30 12:56:32.270: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050311169s
Jan 30 12:56:34.283: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0637473s
STEP: Saw pod success
Jan 30 12:56:34.283: INFO: Pod "downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f" satisfied condition "success or failure"
Jan 30 12:56:34.289: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f container client-container:
STEP: delete the pod
Jan 30 12:56:34.371: INFO: Waiting for pod downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f to disappear
Jan 30 12:56:34.376: INFO: Pod downwardapi-volume-87cf3346-4828-4080-921c-e241d954a22f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 12:56:34.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3283" for this suite.
Jan 30 12:56:40.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:56:40.738: INFO: namespace downward-api-3283 deletion completed in 6.354723013s

• [SLOW TEST:16.634 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:56:40.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2e505817-cfe1-4b89-ae0a-2b933b4460c8
STEP: Creating a pod to test consume configMaps
Jan 30 12:56:40.880: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064" in namespace "projected-7597" to be "success or failure"
Jan 30 12:56:40.888: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059238ms
Jan 30 12:56:42.899: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018633143s
Jan 30 12:56:44.921: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040906231s
Jan 30 12:56:46.937: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057316398s
Jan 30 12:56:48.949: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069124258s
Jan 30 12:56:50.959: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078860597s
STEP: Saw pod success
Jan 30 12:56:50.959: INFO: Pod "pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064" satisfied condition "success or failure"
Jan 30 12:56:50.964: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064 container projected-configmap-volume-test:
STEP: delete the pod
Jan 30 12:56:51.223: INFO: Waiting for pod pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064 to disappear
Jan 30 12:56:51.234: INFO: Pod pod-projected-configmaps-fd4e8aa4-4a4d-49ea-90d9-3b15ffbe1064 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 12:56:51.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7597" for this suite.
Jan 30 12:56:57.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:56:57.433: INFO: namespace projected-7597 deletion completed in 6.187515691s

• [SLOW TEST:16.694 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:56:57.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 30 12:57:06.781: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 12:57:07.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6240" for this suite.
Jan 30 12:57:29.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:57:30.019: INFO: namespace replicaset-6240 deletion completed in 22.176040085s

• [SLOW TEST:32.585 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:57:30.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6729
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 12:57:30.081: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 12:58:08.339: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6729 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 12:58:08.339: INFO: >>> kubeConfig: /root/.kube/config
I0130 12:58:08.449437 8 log.go:172] (0xc0018e4bb0) (0xc00244b9a0) Create stream
I0130 12:58:08.449802 8 log.go:172] (0xc0018e4bb0) (0xc00244b9a0) Stream added, broadcasting: 1
I0130 12:58:08.469372 8 log.go:172] (0xc0018e4bb0) Reply frame received for 1
I0130 12:58:08.469928 8 log.go:172] (0xc0018e4bb0) (0xc001decbe0) Create stream
I0130 12:58:08.469960 8 log.go:172] (0xc0018e4bb0) (0xc001decbe0) Stream added, broadcasting: 3
I0130 12:58:08.477428 8 log.go:172] (0xc0018e4bb0) Reply frame received for 3
I0130 12:58:08.477694 8 log.go:172] (0xc0018e4bb0) (0xc00244ba40) Create stream
I0130 12:58:08.477730 8 log.go:172] (0xc0018e4bb0) (0xc00244ba40) Stream added, broadcasting: 5
I0130 12:58:08.480312 8 log.go:172] (0xc0018e4bb0) Reply frame received for 5
I0130 12:58:08.811096 8 log.go:172] (0xc0018e4bb0) Data frame received for 3
I0130 12:58:08.811203 8 log.go:172] (0xc001decbe0) (3) Data frame handling
I0130 12:58:08.811226 8 log.go:172] (0xc001decbe0) (3) Data frame sent
I0130 12:58:09.012779 8 log.go:172] (0xc0018e4bb0) (0xc001decbe0) Stream removed, broadcasting: 3
I0130 12:58:09.013152 8 log.go:172] (0xc0018e4bb0) Data frame received for 1
I0130 12:58:09.013184 8 log.go:172] (0xc00244b9a0) (1) Data frame handling
I0130 12:58:09.013234 8 log.go:172] (0xc00244b9a0) (1) Data frame sent
I0130 12:58:09.013285 8 log.go:172] (0xc0018e4bb0) (0xc00244b9a0) Stream removed, broadcasting: 1
I0130 12:58:09.015067 8 log.go:172] (0xc0018e4bb0) (0xc00244ba40) Stream removed, broadcasting: 5
I0130 12:58:09.015131 8 log.go:172] (0xc0018e4bb0) Go away received
I0130 12:58:09.016578 8 log.go:172] (0xc0018e4bb0) (0xc00244b9a0) Stream removed, broadcasting: 1
I0130 12:58:09.016746 8 log.go:172] (0xc0018e4bb0) (0xc001decbe0) Stream removed, broadcasting: 3
I0130 12:58:09.016768 8 log.go:172] (0xc0018e4bb0) (0xc00244ba40) Stream removed, broadcasting: 5
Jan 30 12:58:09.016: INFO: Found all expected endpoints: [netserver-0]
Jan 30 12:58:09.038: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6729 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 12:58:09.038: INFO: >>> kubeConfig: /root/.kube/config
I0130 12:58:09.128724 8 log.go:172] (0xc0020e2fd0) (0xc0016f1d60) Create stream
I0130 12:58:09.128965 8 log.go:172] (0xc0020e2fd0) (0xc0016f1d60) Stream added, broadcasting: 1
I0130 12:58:09.138530 8 log.go:172] (0xc0020e2fd0) Reply frame received for 1
I0130 12:58:09.138640 8 log.go:172] (0xc0020e2fd0) (0xc00238d4a0) Create stream
I0130 12:58:09.138658 8 log.go:172] (0xc0020e2fd0) (0xc00238d4a0) Stream added, broadcasting: 3
I0130 12:58:09.140535 8 log.go:172] (0xc0020e2fd0) Reply frame received for 3
I0130 12:58:09.140569 8 log.go:172] (0xc0020e2fd0) (0xc001decd20) Create stream
I0130 12:58:09.140586 8 log.go:172] (0xc0020e2fd0) (0xc001decd20) Stream added, broadcasting: 5
I0130 12:58:09.144281 8 log.go:172] (0xc0020e2fd0) Reply frame received for 5
I0130 12:58:09.274149 8 log.go:172] (0xc0020e2fd0) Data frame received for 3
I0130 12:58:09.274277 8 log.go:172] (0xc00238d4a0) (3) Data frame handling
I0130 12:58:09.274311 8 log.go:172] (0xc00238d4a0) (3) Data frame sent
I0130 12:58:09.434788 8 log.go:172] (0xc0020e2fd0) (0xc00238d4a0) Stream removed, broadcasting: 3
I0130 12:58:09.435061 8 log.go:172] (0xc0020e2fd0) Data frame received for 1
I0130 12:58:09.435093 8 log.go:172] (0xc0016f1d60) (1) Data frame handling
I0130 12:58:09.435120 8 log.go:172] (0xc0016f1d60) (1) Data frame sent
I0130 12:58:09.435165 8 log.go:172] (0xc0020e2fd0) (0xc0016f1d60) Stream removed, broadcasting: 1
I0130 12:58:09.435608 8 log.go:172] (0xc0020e2fd0) (0xc001decd20) Stream removed, broadcasting: 5
I0130 12:58:09.435645 8 log.go:172] (0xc0020e2fd0) Go away received
I0130 12:58:09.436055 8 log.go:172] (0xc0020e2fd0) (0xc0016f1d60) Stream removed, broadcasting: 1
I0130 12:58:09.436109 8 log.go:172] (0xc0020e2fd0) (0xc00238d4a0) Stream removed, broadcasting: 3
I0130 12:58:09.436131 8 log.go:172] (0xc0020e2fd0) (0xc001decd20) Stream removed, broadcasting: 5
Jan 30 12:58:09.436: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 12:58:09.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6729" for this suite.
Jan 30 12:58:21.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:58:21.588: INFO: namespace pod-network-test-6729 deletion completed in 12.13982269s

• [SLOW TEST:51.569 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:58:21.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-7ffc9e5e-9849-4f3f-93a2-00aa44562a62
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 12:58:21.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7161" for this suite.
Jan 30 12:58:27.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:58:28.000: INFO: namespace secrets-7161 deletion completed in 6.258855874s

• [SLOW TEST:6.411 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:58:28.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 12:58:36.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7783" for this suite.
Jan 30 12:59:28.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 12:59:28.435: INFO: namespace kubelet-test-7783 deletion completed in 52.231652963s

• [SLOW TEST:60.435 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 12:59:28.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 30 12:59:28.559: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 30 12:59:28.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4153'
Jan 30 12:59:30.887: INFO: stderr: ""
Jan 30 12:59:30.887: INFO: stdout: "service/redis-slave created\n"
Jan 30 12:59:30.888: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 30 12:59:30.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4153'
Jan 30 12:59:31.442: INFO: stderr: ""
Jan 30 12:59:31.443: INFO: stdout: "service/redis-master created\n"
Jan 30 12:59:31.444: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 30 12:59:31.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4153'
Jan 30 12:59:32.744: INFO: stderr: ""
Jan 30 12:59:32.744: INFO: stdout: "service/frontend created\n"
Jan 30 12:59:32.747: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 30 12:59:32.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4153'
Jan 30 12:59:33.436: INFO: stderr: ""
Jan 30 12:59:33.436: INFO: stdout: "deployment.apps/frontend created\n"
Jan 30 12:59:33.437: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 30 12:59:33.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4153'
Jan 30 12:59:34.074: INFO: stderr: ""
Jan 30 12:59:34.074: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 30 12:59:34.076: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 30 12:59:34.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4153'
Jan 30 12:59:35.149: INFO: stderr: ""
Jan 30 12:59:35.149: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 30 12:59:35.149: INFO: Waiting for all frontend pods to be Running.
Jan 30 13:00:00.203: INFO: Waiting for frontend to serve content.
Jan 30 13:00:00.287: INFO: Trying to add a new entry to the guestbook.
Jan 30 13:00:00.357: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 30 13:00:00.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4153'
Jan 30 13:00:00.650: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:00:00.650: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 13:00:00.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4153'
Jan 30 13:00:00.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:00:00.892: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 13:00:00.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4153'
Jan 30 13:00:01.163: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:00:01.164: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 13:00:01.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4153'
Jan 30 13:00:01.298: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:00:01.298: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 13:00:01.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4153'
Jan 30 13:00:01.430: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:00:01.431: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 13:00:01.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4153'
Jan 30 13:00:03.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:00:03.520: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:00:03.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4153" for this suite.
Jan 30 13:00:51.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:00:51.871: INFO: namespace kubectl-4153 deletion completed in 48.338190649s

• [SLOW TEST:83.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:00:51.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-41ec9b2c-13a5-4f54-ac2e-e5ffcb68371b
STEP: Creating a pod to test consume secrets
Jan 30 13:00:52.072: INFO: Waiting up to 5m0s for pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4" in namespace "secrets-4709" to be "success or failure"
Jan 30 13:00:52.192: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 118.99147ms
Jan 30 13:00:54.205: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131697507s
Jan 30 13:00:56.218: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145517005s
Jan 30 13:00:58.231: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158060259s
Jan 30 13:01:00.245: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172508955s
Jan 30 13:01:02.259: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.186266559s
Jan 30 13:01:04.268: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.194776637s
Jan 30 13:01:06.277: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.204429068s
STEP: Saw pod success
Jan 30 13:01:06.278: INFO: Pod "pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4" satisfied condition "success or failure"
Jan 30 13:01:06.284: INFO: Trying to get logs from node iruya-node pod pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4 container secret-volume-test:
STEP: delete the pod
Jan 30 13:01:06.562: INFO: Waiting for pod pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4 to disappear
Jan 30 13:01:06.617: INFO: Pod pod-secrets-c5f927ce-4ad4-43be-80ef-02f9563c87a4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:01:06.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4709" for this suite.
Jan 30 13:01:12.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:01:12.831: INFO: namespace secrets-4709 deletion completed in 6.205206137s • [SLOW TEST:20.959 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:01:12.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 30 13:01:13.140: INFO: Number of nodes with available pods: 0 Jan 30 13:01:13.140: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:14.645: INFO: Number of nodes with available pods: 0 Jan 30 13:01:14.645: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:15.282: INFO: Number of nodes with available pods: 0 Jan 30 13:01:15.282: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:16.166: INFO: Number of nodes with available pods: 0 Jan 30 13:01:16.166: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:17.170: INFO: Number of nodes with available pods: 0 Jan 30 13:01:17.170: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:18.158: INFO: Number of nodes with available pods: 0 Jan 30 13:01:18.158: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:19.663: INFO: Number of nodes with available pods: 0 Jan 30 13:01:19.663: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:21.230: INFO: Number of nodes with available pods: 0 Jan 30 13:01:21.231: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:22.593: INFO: Number of nodes with available pods: 0 Jan 30 13:01:22.593: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:23.151: INFO: Number of nodes with available pods: 0 Jan 30 13:01:23.151: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:24.161: INFO: Number of nodes with available pods: 0 Jan 30 13:01:24.161: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:25.156: INFO: Number of nodes with available pods: 1 Jan 30 13:01:25.157: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:01:26.160: INFO: Number of nodes with available pods: 2 Jan 30 13:01:26.160: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
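The alternating `Number of nodes with available pods` / `Node … is running more than one daemon pod` lines above come from a per-node check that each node runs exactly one daemon pod; note the second message is also logged while a node still has zero pods, which is why it appears even though no node is over-scheduled. A rough Python sketch of that predicate (names are illustrative, not the framework's):

```python
from collections import Counter

def check_daemon_pods(pod_nodes, expected_nodes):
    """Given the node each available daemon pod runs on, return how many
    expected nodes have at least one pod, and whether every expected node
    has exactly one (the condition the test polls for)."""
    per_node = Counter(pod_nodes)
    nodes_with_pods = sum(1 for n in expected_nodes if per_node[n] >= 1)
    ok = all(per_node[n] == 1 for n in expected_nodes)
    return nodes_with_pods, ok

# Mid-rollout: only one of the two nodes has its daemon pod so far.
print(check_daemon_pods(["iruya-node"],
                        ["iruya-node", "iruya-server-sfge57q7djm7"]))  # prints: (1, False)
```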
Jan 30 13:01:26.224: INFO: Number of nodes with available pods: 1 Jan 30 13:01:26.224: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:27.237: INFO: Number of nodes with available pods: 1 Jan 30 13:01:27.237: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:28.250: INFO: Number of nodes with available pods: 1 Jan 30 13:01:28.250: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:29.329: INFO: Number of nodes with available pods: 1 Jan 30 13:01:29.329: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:30.245: INFO: Number of nodes with available pods: 1 Jan 30 13:01:30.245: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:31.247: INFO: Number of nodes with available pods: 1 Jan 30 13:01:31.247: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:32.239: INFO: Number of nodes with available pods: 1 Jan 30 13:01:32.239: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:33.244: INFO: Number of nodes with available pods: 1 Jan 30 13:01:33.244: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:34.240: INFO: Number of nodes with available pods: 1 Jan 30 13:01:34.240: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:35.239: INFO: Number of nodes with available pods: 1 Jan 30 13:01:35.239: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:36.245: INFO: Number of nodes with available pods: 1 Jan 30 13:01:36.245: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:37.240: INFO: Number of nodes with available pods: 1 Jan 30 13:01:37.240: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:38.250: INFO: Number of nodes with available pods: 1
Jan 30 13:01:38.250: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:39.686: INFO: Number of nodes with available pods: 1 Jan 30 13:01:39.686: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:40.856: INFO: Number of nodes with available pods: 1 Jan 30 13:01:40.856: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:41.240: INFO: Number of nodes with available pods: 1 Jan 30 13:01:41.240: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:42.266: INFO: Number of nodes with available pods: 1 Jan 30 13:01:42.266: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:44.689: INFO: Number of nodes with available pods: 1 Jan 30 13:01:44.690: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:45.243: INFO: Number of nodes with available pods: 1 Jan 30 13:01:45.243: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:46.236: INFO: Number of nodes with available pods: 1 Jan 30 13:01:46.236: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:47.241: INFO: Number of nodes with available pods: 1 Jan 30 13:01:47.242: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 30 13:01:48.243: INFO: Number of nodes with available pods: 2 Jan 30 13:01:48.243: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2456, will wait for the garbage collector to delete the pods Jan 30 13:01:48.349: INFO: Deleting DaemonSet.extensions daemon-set took: 46.178304ms
Jan 30 13:01:48.650: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.874523ms Jan 30 13:01:57.923: INFO: Number of nodes with available pods: 0 Jan 30 13:01:57.923: INFO: Number of running nodes: 0, number of available pods: 0 Jan 30 13:01:57.939: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2456/daemonsets","resourceVersion":"22435951"},"items":null} Jan 30 13:01:57.947: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2456/pods","resourceVersion":"22435951"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:01:57.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2456" for this suite. Jan 30 13:02:03.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:02:04.103: INFO: namespace daemonsets-2456 deletion completed in 6.133320887s • [SLOW TEST:51.270 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:02:04.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 30 13:02:04.208: INFO: Waiting up to 5m0s for pod "pod-d1206a9c-28d7-4490-8142-213be79388c0" in namespace "emptydir-1195" to be "success or failure" Jan 30 13:02:04.255: INFO: Pod "pod-d1206a9c-28d7-4490-8142-213be79388c0": Phase="Pending", Reason="", readiness=false. Elapsed: 46.474499ms Jan 30 13:02:06.271: INFO: Pod "pod-d1206a9c-28d7-4490-8142-213be79388c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062946393s Jan 30 13:02:08.279: INFO: Pod "pod-d1206a9c-28d7-4490-8142-213be79388c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07084242s Jan 30 13:02:10.290: INFO: Pod "pod-d1206a9c-28d7-4490-8142-213be79388c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082076336s Jan 30 13:02:12.314: INFO: Pod "pod-d1206a9c-28d7-4490-8142-213be79388c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106043809s STEP: Saw pod success Jan 30 13:02:12.315: INFO: Pod "pod-d1206a9c-28d7-4490-8142-213be79388c0" satisfied condition "success or failure" Jan 30 13:02:12.332: INFO: Trying to get logs from node iruya-node pod pod-d1206a9c-28d7-4490-8142-213be79388c0 container test-container: STEP: delete the pod Jan 30 13:02:12.458: INFO: Waiting for pod pod-d1206a9c-28d7-4490-8142-213be79388c0 to disappear Jan 30 13:02:12.464: INFO: Pod pod-d1206a9c-28d7-4490-8142-213be79388c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:02:12.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1195" for this suite.
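The EmptyDir cases in this run encode the expected file permissions in the test name (0644 here, 0777 below). As a quick reminder of what those octal modes unpack to, a small sketch using the standard library:

```python
import stat

def mode_to_rwx(mode):
    """Render a numeric file mode as the familiar rwx triple string."""
    return stat.filemode(mode)[1:]  # drop the leading file-type character

print(mode_to_rwx(0o644))  # prints: rw-r--r--  (owner read/write, group and other read-only)
print(mode_to_rwx(0o777))  # prints: rwxrwxrwx  (read/write/execute for everyone)
```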
Jan 30 13:02:18.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:02:18.714: INFO: namespace emptydir-1195 deletion completed in 6.244911002s • [SLOW TEST:14.610 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:02:18.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 30 13:02:18.820: INFO: Waiting up to 5m0s for pod "pod-767ad019-bd57-4f17-8b1e-960135711da9" in namespace "emptydir-7301" to be "success or failure" Jan 30 13:02:18.829: INFO: Pod "pod-767ad019-bd57-4f17-8b1e-960135711da9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.155859ms
Jan 30 13:02:20.840: INFO: Pod "pod-767ad019-bd57-4f17-8b1e-960135711da9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02049198s Jan 30 13:02:22.856: INFO: Pod "pod-767ad019-bd57-4f17-8b1e-960135711da9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03575338s Jan 30 13:02:24.876: INFO: Pod "pod-767ad019-bd57-4f17-8b1e-960135711da9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056077428s Jan 30 13:02:26.891: INFO: Pod "pod-767ad019-bd57-4f17-8b1e-960135711da9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071515065s STEP: Saw pod success Jan 30 13:02:26.892: INFO: Pod "pod-767ad019-bd57-4f17-8b1e-960135711da9" satisfied condition "success or failure" Jan 30 13:02:26.897: INFO: Trying to get logs from node iruya-node pod pod-767ad019-bd57-4f17-8b1e-960135711da9 container test-container: STEP: delete the pod Jan 30 13:02:27.006: INFO: Waiting for pod pod-767ad019-bd57-4f17-8b1e-960135711da9 to disappear Jan 30 13:02:27.018: INFO: Pod pod-767ad019-bd57-4f17-8b1e-960135711da9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:02:27.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7301" for this suite.
Jan 30 13:02:33.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:02:33.258: INFO: namespace emptydir-7301 deletion completed in 6.233028334s • [SLOW TEST:14.543 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:02:33.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f140a5be-bf1b-41f6-83fb-822f20e32624 STEP: Creating a pod to test consume secrets Jan 30 13:02:33.441: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483" in namespace "projected-762" to be "success or failure"
Jan 30 13:02:33.447: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137359ms Jan 30 13:02:35.454: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013061676s Jan 30 13:02:37.463: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022221025s Jan 30 13:02:39.475: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034092133s Jan 30 13:02:41.483: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041826614s Jan 30 13:02:43.495: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054525248s STEP: Saw pod success Jan 30 13:02:43.496: INFO: Pod "pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483" satisfied condition "success or failure" Jan 30 13:02:43.501: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483 container projected-secret-volume-test: STEP: delete the pod Jan 30 13:02:43.548: INFO: Waiting for pod pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483 to disappear Jan 30 13:02:43.552: INFO: Pod pod-projected-secrets-d23b2c40-8215-46ae-bdd4-0b6c4c7a3483 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:02:43.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-762" for this suite.
Jan 30 13:02:49.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:02:49.714: INFO: namespace projected-762 deletion completed in 6.159093811s • [SLOW TEST:16.456 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:02:49.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:02:59.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5102" for this suite. 
Jan 30 13:03:21.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:03:21.143: INFO: namespace replication-controller-5102 deletion completed in 22.118334957s • [SLOW TEST:31.428 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:03:21.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-17885cc0-ae8b-4ab9-9448-c4e3d7f10ce8 STEP: Creating secret with name s-test-opt-upd-cb3945ac-2fd7-427a-b35f-e81a1b01a0d4 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-17885cc0-ae8b-4ab9-9448-c4e3d7f10ce8 STEP: Updating secret s-test-opt-upd-cb3945ac-2fd7-427a-b35f-e81a1b01a0d4 STEP: Creating secret with name s-test-opt-create-8a86dff1-e6ab-41fe-8db2-7ca59cac022d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:04:53.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4238" for this suite. Jan 30 13:05:17.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:05:18.093: INFO: namespace secrets-4238 deletion completed in 24.242221443s • [SLOW TEST:116.949 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:05:18.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jan 30 13:05:18.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-444' Jan 30 13:05:18.811: INFO: stderr: ""
Jan 30 13:05:18.811: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 30 13:05:18.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-444' Jan 30 13:05:19.007: INFO: stderr: "" Jan 30 13:05:19.008: INFO: stdout: "update-demo-nautilus-cplff update-demo-nautilus-vggk9 " Jan 30 13:05:19.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cplff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:19.320: INFO: stderr: "" Jan 30 13:05:19.321: INFO: stdout: "" Jan 30 13:05:19.321: INFO: update-demo-nautilus-cplff is created but not running Jan 30 13:05:24.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-444' Jan 30 13:05:24.493: INFO: stderr: "" Jan 30 13:05:24.494: INFO: stdout: "update-demo-nautilus-cplff update-demo-nautilus-vggk9 "
Jan 30 13:05:24.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cplff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:24.638: INFO: stderr: "" Jan 30 13:05:24.638: INFO: stdout: "" Jan 30 13:05:24.638: INFO: update-demo-nautilus-cplff is created but not running Jan 30 13:05:29.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-444' Jan 30 13:05:30.198: INFO: stderr: "" Jan 30 13:05:30.198: INFO: stdout: "update-demo-nautilus-cplff update-demo-nautilus-vggk9 " Jan 30 13:05:30.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cplff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:30.527: INFO: stderr: "" Jan 30 13:05:30.528: INFO: stdout: "" Jan 30 13:05:30.528: INFO: update-demo-nautilus-cplff is created but not running Jan 30 13:05:35.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-444' Jan 30 13:05:35.686: INFO: stderr: "" Jan 30 13:05:35.686: INFO: stdout: "update-demo-nautilus-cplff update-demo-nautilus-vggk9 "
Jan 30 13:05:35.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cplff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:35.832: INFO: stderr: "" Jan 30 13:05:35.832: INFO: stdout: "true" Jan 30 13:05:35.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cplff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:35.962: INFO: stderr: "" Jan 30 13:05:35.963: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 30 13:05:35.963: INFO: validating pod update-demo-nautilus-cplff Jan 30 13:05:35.990: INFO: got data: { "image": "nautilus.jpg" } Jan 30 13:05:35.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 30 13:05:35.990: INFO: update-demo-nautilus-cplff is verified up and running Jan 30 13:05:35.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vggk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:36.109: INFO: stderr: "" Jan 30 13:05:36.110: INFO: stdout: "true" Jan 30 13:05:36.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vggk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:05:36.200: INFO: stderr: "" Jan 30 13:05:36.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 30 13:05:36.201: INFO: validating pod update-demo-nautilus-vggk9 Jan 30 13:05:36.229: INFO: got data: { "image": "nautilus.jpg" } Jan 30 13:05:36.229: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
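The long `--template` expressions above each answer one question: does the pod report a containerStatuses entry named update-demo whose state has a `running` key? The same predicate over the pod's JSON, sketched in Python (function name is illustrative):

```python
def container_running(pod, name):
    """Mirror the Go template check: True iff a containerStatus with the
    given name exists and its state map contains a 'running' entry."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

pod = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "…"}}}]}}
print(container_running(pod, "update-demo"))  # prints: True
```

An empty stdout from the template (as in the `is created but not running` lines) corresponds to this predicate returning False, so the test sleeps and retries.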
Jan 30 13:05:36.229: INFO: update-demo-nautilus-vggk9 is verified up and running STEP: rolling-update to new replication controller Jan 30 13:05:36.231: INFO: scanned /root for discovery docs: Jan 30 13:05:36.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-444' Jan 30 13:06:14.811: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 30 13:06:14.811: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 30 13:06:14.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-444' Jan 30 13:06:15.084: INFO: stderr: "" Jan 30 13:06:15.084: INFO: stdout: "update-demo-kitten-55xst update-demo-kitten-z8mw8 update-demo-nautilus-cplff " STEP: Replicas for name=update-demo: expected=2 actual=3 Jan 30 13:06:20.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-444' Jan 30 13:06:20.289: INFO: stderr: "" Jan 30 13:06:20.289: INFO: stdout: "update-demo-kitten-55xst update-demo-kitten-z8mw8 "
Jan 30 13:06:20.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-55xst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:06:20.420: INFO: stderr: "" Jan 30 13:06:20.420: INFO: stdout: "true" Jan 30 13:06:20.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-55xst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:06:20.644: INFO: stderr: "" Jan 30 13:06:20.644: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 30 13:06:20.644: INFO: validating pod update-demo-kitten-55xst Jan 30 13:06:20.702: INFO: got data: { "image": "kitten.jpg" } Jan 30 13:06:20.702: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 30 13:06:20.702: INFO: update-demo-kitten-55xst is verified up and running Jan 30 13:06:20.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z8mw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:06:20.900: INFO: stderr: "" Jan 30 13:06:20.901: INFO: stdout: "true"
Jan 30 13:06:20.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-z8mw8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-444' Jan 30 13:06:21.087: INFO: stderr: "" Jan 30 13:06:21.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 30 13:06:21.087: INFO: validating pod update-demo-kitten-z8mw8 Jan 30 13:06:21.100: INFO: got data: { "image": "kitten.jpg" } Jan 30 13:06:21.100: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 30 13:06:21.100: INFO: update-demo-kitten-z8mw8 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:06:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-444" for this suite. Jan 30 13:06:45.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:06:45.267: INFO: namespace kubectl-444 deletion completed in 24.16316971s • [SLOW TEST:87.173 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:06:45.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api
object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 30 13:06:45.480: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"87834217-c876-4f60-b263-b13cd0774ba7", Controller:(*bool)(0xc0021a0742), BlockOwnerDeletion:(*bool)(0xc0021a0743)}} Jan 30 13:06:45.592: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5a1e3501-e493-47fe-a1d4-a8e37455445f", Controller:(*bool)(0xc002c020aa), BlockOwnerDeletion:(*bool)(0xc002c020ab)}} Jan 30 13:06:45.611: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b2162790-779c-44ba-b152-02ac1354e420", Controller:(*bool)(0xc002c0239a), BlockOwnerDeletion:(*bool)(0xc002c0239b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:06:50.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4355" for this suite. 
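The three OwnerReferences logged above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the test asserts the garbage collector still makes progress. A minimal Python sketch of walking that cycle, using only the pod names from the log (the `follow_owners` helper is hypothetical, not part of the e2e framework):

```python
def follow_owners(owners, start):
    """Walk the ownerReference chain from `start` until a pod repeats.

    `owners` maps a pod name to the name of its owner. In a dependency
    circle the walk eventually returns to a pod already visited.
    """
    visited = []
    current = start
    while current not in visited:
        visited.append(current)
        current = owners[current]
    return current, visited

# The cycle exactly as logged: pod1 -> pod3 -> pod2 -> pod1.
cycle = {"pod1": "pod3", "pod3": "pod2", "pod2": "pod1"}
```

Starting from pod1 the walk visits all three pods and arrives back at pod1, which is why a naive "delete owners first" ordering can never terminate and the collector has to break the circle itself.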
Jan 30 13:06:56.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:06:57.000: INFO: namespace gc-4355 deletion completed in 6.203492953s
• [SLOW TEST:11.733 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:06:57.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 30 13:06:57.084: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:06:58.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-892" for this suite.
Jan 30 13:07:04.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:07:04.634: INFO: namespace replication-controller-892 deletion completed in 6.265614802s
• [SLOW TEST:7.634 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:07:04.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 30 13:07:16.959: INFO: Pod pod-hostip-7768b012-d7fe-49d8-96d7-a662307abcf6 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:07:16.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8857" for this suite.
Jan 30 13:07:38.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:07:39.089: INFO: namespace pods-8857 deletion completed in 22.124256793s
• [SLOW TEST:34.455 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:07:39.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 13:07:39.231: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 13:07:39.280: INFO: Number of nodes with available pods: 0
Jan 30 13:07:39.280: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:40.968: INFO: Number of nodes with available pods: 0
Jan 30 13:07:40.968: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:41.295: INFO: Number of nodes with available pods: 0
Jan 30 13:07:41.295: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:42.373: INFO: Number of nodes with available pods: 0
Jan 30 13:07:42.373: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:43.291: INFO: Number of nodes with available pods: 0
Jan 30 13:07:43.291: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:44.311: INFO: Number of nodes with available pods: 0
Jan 30 13:07:44.311: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:45.799: INFO: Number of nodes with available pods: 0
Jan 30 13:07:45.799: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:46.352: INFO: Number of nodes with available pods: 0
Jan 30 13:07:46.352: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:47.295: INFO: Number of nodes with available pods: 0
Jan 30 13:07:47.295: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:48.303: INFO: Number of nodes with available pods: 0
Jan 30 13:07:48.303: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:49.297: INFO: Number of nodes with available pods: 1
Jan 30 13:07:49.297: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:07:50.341: INFO: Number of nodes with available pods: 2
Jan 30 13:07:50.342: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 30 13:07:50.414: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:50.414: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:51.449: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:51.449: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:52.507: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:52.507: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:53.446: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:53.447: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:54.452: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:54.452: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:55.455: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:55.455: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:56.441: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:56.442: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:57.445: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:57.445: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:58.443: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:58.443: INFO: Wrong image for pod: daemon-set-x6zkd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:58.443: INFO: Pod daemon-set-x6zkd is not available
Jan 30 13:07:59.448: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:07:59.448: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:00.472: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:00.472: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:01.455: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:01.455: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:02.447: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:02.447: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:03.445: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:03.445: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:04.443: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:04.444: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:05.456: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:05.456: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:06.444: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:06.444: INFO: Pod daemon-set-pnh7c is not available
Jan 30 13:08:07.448: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:08.484: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:09.445: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:10.443: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:11.445: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:11.445: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:12.445: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:12.446: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:13.449: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:13.449: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:14.447: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:14.447: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:15.442: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:15.442: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:16.444: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:16.444: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:17.446: INFO: Wrong image for pod: daemon-set-lp79v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 30 13:08:17.446: INFO: Pod daemon-set-lp79v is not available
Jan 30 13:08:19.442: INFO: Pod daemon-set-hsfng is not available
STEP: Check that daemon pods are still running on every node of the cluster.
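The long run of "Wrong image for pod" lines above is the RollingUpdate check polling until every daemon pod reports the new image. A minimal Python sketch of the comparison behind each of those lines (function name and input shape are hypothetical; the real check lives in test/e2e/apps/daemon_set.go):

```python
def wrong_image_pods(pod_images, expected):
    """Report pods whose container image has not yet rolled to `expected`,
    formatted like the e2e log lines above."""
    return [
        f"Wrong image for pod: {name}. Expected: {expected}, got: {image}."
        for name, image in sorted(pod_images.items())
        if image != expected
    ]
```

The e2e loop re-runs a check like this every second until the list is empty, then separately waits for the replacement pods to become available.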
Jan 30 13:08:20.303: INFO: Number of nodes with available pods: 1
Jan 30 13:08:20.303: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:21.319: INFO: Number of nodes with available pods: 1
Jan 30 13:08:21.319: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:22.344: INFO: Number of nodes with available pods: 1
Jan 30 13:08:22.344: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:23.318: INFO: Number of nodes with available pods: 1
Jan 30 13:08:23.318: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:24.319: INFO: Number of nodes with available pods: 1
Jan 30 13:08:24.319: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:25.321: INFO: Number of nodes with available pods: 1
Jan 30 13:08:25.321: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:26.516: INFO: Number of nodes with available pods: 1
Jan 30 13:08:26.516: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:27.317: INFO: Number of nodes with available pods: 1
Jan 30 13:08:27.317: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:28.325: INFO: Number of nodes with available pods: 1
Jan 30 13:08:28.325: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 30 13:08:29.357: INFO: Number of nodes with available pods: 2
Jan 30 13:08:29.357: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3539, will wait for the garbage collector to delete the pods
Jan 30 13:08:29.489: INFO: Deleting DaemonSet.extensions daemon-set took: 22.225201ms
Jan 30 13:08:29.890: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.267525ms
Jan 30 13:08:37.398: INFO: Number of nodes with available pods: 0
Jan 30 13:08:37.398: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 13:08:37.401: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3539/daemonsets","resourceVersion":"22436939"},"items":null}
Jan 30 13:08:37.404: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3539/pods","resourceVersion":"22436939"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:08:37.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3539" for this suite.
Jan 30 13:08:43.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:08:43.574: INFO: namespace daemonsets-3539 deletion completed in 6.153829871s
• [SLOW TEST:64.485 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:08:43.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 30 13:08:43.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1213'
Jan 30 13:08:44.025: INFO: stderr: ""
Jan 30 13:08:44.026: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 13:08:44.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:08:44.350: INFO: stderr: ""
Jan 30 13:08:44.350: INFO: stdout: "update-demo-nautilus-mbv6p update-demo-nautilus-tgt27 "
Jan 30 13:08:44.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mbv6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:08:44.525: INFO: stderr: ""
Jan 30 13:08:44.525: INFO: stdout: ""
Jan 30 13:08:44.525: INFO: update-demo-nautilus-mbv6p is created but not running
Jan 30 13:08:49.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:08:50.444: INFO: stderr: ""
Jan 30 13:08:50.444: INFO: stdout: "update-demo-nautilus-mbv6p update-demo-nautilus-tgt27 "
Jan 30 13:08:50.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mbv6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:08:50.781: INFO: stderr: ""
Jan 30 13:08:50.782: INFO: stdout: ""
Jan 30 13:08:50.782: INFO: update-demo-nautilus-mbv6p is created but not running
Jan 30 13:08:55.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:08:55.946: INFO: stderr: ""
Jan 30 13:08:55.946: INFO: stdout: "update-demo-nautilus-mbv6p update-demo-nautilus-tgt27 "
Jan 30 13:08:55.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mbv6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:08:56.142: INFO: stderr: ""
Jan 30 13:08:56.142: INFO: stdout: "true"
Jan 30 13:08:56.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mbv6p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:08:56.320: INFO: stderr: ""
Jan 30 13:08:56.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:08:56.320: INFO: validating pod update-demo-nautilus-mbv6p
Jan 30 13:08:56.340: INFO: got data: { "image": "nautilus.jpg" }
Jan 30 13:08:56.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:08:56.340: INFO: update-demo-nautilus-mbv6p is verified up and running
Jan 30 13:08:56.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:08:56.517: INFO: stderr: ""
Jan 30 13:08:56.518: INFO: stdout: "true"
Jan 30 13:08:56.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:08:56.717: INFO: stderr: ""
Jan 30 13:08:56.717: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:08:56.717: INFO: validating pod update-demo-nautilus-tgt27
Jan 30 13:08:56.731: INFO: got data: { "image": "nautilus.jpg" }
Jan 30 13:08:56.731: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:08:56.731: INFO: update-demo-nautilus-tgt27 is verified up and running
STEP: scaling down the replication controller
Jan 30 13:08:56.738: INFO: scanned /root for discovery docs:
Jan 30 13:08:56.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1213'
Jan 30 13:08:58.006: INFO: stderr: ""
Jan 30 13:08:58.006: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 13:08:58.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:08:58.230: INFO: stderr: ""
Jan 30 13:08:58.230: INFO: stdout: "update-demo-nautilus-mbv6p update-demo-nautilus-tgt27 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 30 13:09:03.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:09:03.440: INFO: stderr: ""
Jan 30 13:09:03.441: INFO: stdout: "update-demo-nautilus-mbv6p update-demo-nautilus-tgt27 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 30 13:09:08.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:09:08.643: INFO: stderr: ""
Jan 30 13:09:08.643: INFO: stdout: "update-demo-nautilus-tgt27 "
Jan 30 13:09:08.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:08.870: INFO: stderr: ""
Jan 30 13:09:08.870: INFO: stdout: "true"
Jan 30 13:09:08.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:09.020: INFO: stderr: ""
Jan 30 13:09:09.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:09:09.021: INFO: validating pod update-demo-nautilus-tgt27
Jan 30 13:09:09.026: INFO: got data: { "image": "nautilus.jpg" }
Jan 30 13:09:09.026: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:09:09.026: INFO: update-demo-nautilus-tgt27 is verified up and running
STEP: scaling up the replication controller
Jan 30 13:09:09.029: INFO: scanned /root for discovery docs:
Jan 30 13:09:09.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1213'
Jan 30 13:09:10.216: INFO: stderr: ""
Jan 30 13:09:10.216: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 13:09:10.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:09:10.391: INFO: stderr: ""
Jan 30 13:09:10.392: INFO: stdout: "update-demo-nautilus-tgt27 update-demo-nautilus-wpk9l "
Jan 30 13:09:10.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:10.614: INFO: stderr: ""
Jan 30 13:09:10.615: INFO: stdout: "true"
Jan 30 13:09:10.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:10.733: INFO: stderr: ""
Jan 30 13:09:10.733: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:09:10.733: INFO: validating pod update-demo-nautilus-tgt27
Jan 30 13:09:10.742: INFO: got data: { "image": "nautilus.jpg" }
Jan 30 13:09:10.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:09:10.742: INFO: update-demo-nautilus-tgt27 is verified up and running
Jan 30 13:09:10.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpk9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:10.886: INFO: stderr: ""
Jan 30 13:09:10.887: INFO: stdout: ""
Jan 30 13:09:10.887: INFO: update-demo-nautilus-wpk9l is created but not running
Jan 30 13:09:15.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1213'
Jan 30 13:09:16.085: INFO: stderr: ""
Jan 30 13:09:16.086: INFO: stdout: "update-demo-nautilus-tgt27 update-demo-nautilus-wpk9l "
Jan 30 13:09:16.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:16.256: INFO: stderr: ""
Jan 30 13:09:16.257: INFO: stdout: "true"
Jan 30 13:09:16.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgt27 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:16.451: INFO: stderr: ""
Jan 30 13:09:16.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:09:16.451: INFO: validating pod update-demo-nautilus-tgt27
Jan 30 13:09:16.494: INFO: got data: { "image": "nautilus.jpg" }
Jan 30 13:09:16.495: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:09:16.495: INFO: update-demo-nautilus-tgt27 is verified up and running
Jan 30 13:09:16.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpk9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:16.668: INFO: stderr: ""
Jan 30 13:09:16.669: INFO: stdout: "true"
Jan 30 13:09:16.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wpk9l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1213'
Jan 30 13:09:16.845: INFO: stderr: ""
Jan 30 13:09:16.845: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 13:09:16.846: INFO: validating pod update-demo-nautilus-wpk9l
Jan 30 13:09:16.905: INFO: got data: { "image": "nautilus.jpg" }
Jan 30 13:09:16.906: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 13:09:16.906: INFO: update-demo-nautilus-wpk9l is verified up and running
STEP: using delete to clean up resources
Jan 30 13:09:16.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1213'
Jan 30 13:09:17.053: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 13:09:17.053: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 30 13:09:17.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1213'
Jan 30 13:09:17.194: INFO: stderr: "No resources found.\n"
Jan 30 13:09:17.194: INFO: stdout: ""
Jan 30 13:09:17.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1213 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 13:09:17.298: INFO: stderr: ""
Jan 30 13:09:17.299: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:09:17.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1213" for this suite.
Jan 30 13:09:41.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:09:41.495: INFO: namespace kubectl-1213 deletion completed in 24.139621127s • [SLOW TEST:57.920 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:09:41.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 30 13:09:41.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2" in namespace "projected-7570" to be "success or failure" Jan 30 13:09:41.656: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.757244ms Jan 30 13:09:43.666: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025574231s Jan 30 13:09:45.677: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036540083s Jan 30 13:09:47.690: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048811877s Jan 30 13:09:49.710: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069208077s Jan 30 13:09:51.720: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079603354s STEP: Saw pod success Jan 30 13:09:51.721: INFO: Pod "downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2" satisfied condition "success or failure" Jan 30 13:09:51.725: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2 container client-container: STEP: delete the pod Jan 30 13:09:52.181: INFO: Waiting for pod downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2 to disappear Jan 30 13:09:52.244: INFO: Pod downwardapi-volume-4559b37a-c5db-4ab3-af49-a0e3313a7bd2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:09:52.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7570" for this suite. 
Jan 30 13:09:58.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:09:58.412: INFO: namespace projected-7570 deletion completed in 6.160524394s • [SLOW TEST:16.917 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:09:58.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 30 13:09:58.471: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:10:16.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6389" for 
this suite. Jan 30 13:10:22.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:10:22.771: INFO: namespace init-container-6389 deletion completed in 6.58969227s • [SLOW TEST:24.358 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:10:22.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 30 13:10:22.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5690' Jan 30 13:10:24.922: INFO: stderr: "kubectl run --generator=run/v1 
is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 30 13:10:24.923: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Jan 30 13:10:25.046: INFO: scanned /root for discovery docs: Jan 30 13:10:25.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5690' Jan 30 13:10:48.473: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 30 13:10:48.474: INFO: stdout: "Created e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e\nScaling up e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. 
Jan 30 13:10:48.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:10:48.728: INFO: stderr: "" Jan 30 13:10:48.728: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:10:53.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:10:53.951: INFO: stderr: "" Jan 30 13:10:53.951: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:10:58.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:10:59.120: INFO: stderr: "" Jan 30 13:10:59.120: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:04.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:04.264: INFO: stderr: "" Jan 30 13:11:04.264: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:09.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:09.440: INFO: stderr: "" Jan 
30 13:11:09.440: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:14.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:14.609: INFO: stderr: "" Jan 30 13:11:14.609: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:19.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:19.911: INFO: stderr: "" Jan 30 13:11:19.911: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:24.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:25.039: INFO: stderr: "" Jan 30 13:11:25.040: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:30.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:30.259: INFO: stderr: "" Jan 30 13:11:30.260: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:35.261: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:35.459: INFO: stderr: "" Jan 30 13:11:35.460: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:40.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:40.697: INFO: stderr: "" Jan 30 13:11:40.697: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:45.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:45.988: INFO: stderr: "" Jan 30 13:11:45.989: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:50.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:51.208: INFO: stderr: "" Jan 30 13:11:51.208: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:11:56.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:11:56.344: INFO: stderr: "" Jan 30 13:11:56.344: INFO: stdout: 
"e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:12:01.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:12:01.542: INFO: stderr: "" Jan 30 13:12:01.543: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k e2e-test-nginx-rc-jnz49 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jan 30 13:12:06.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:12:06.703: INFO: stderr: "" Jan 30 13:12:06.703: INFO: stdout: "e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k " Jan 30 13:12:06.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5690' Jan 30 13:12:06.831: INFO: stderr: "" Jan 30 13:12:06.831: INFO: stdout: "true" Jan 30 13:12:06.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5690' Jan 30 13:12:06.972: INFO: stderr: "" Jan 30 13:12:06.972: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 30 13:12:06.972: INFO: e2e-test-nginx-rc-de45c88a972044ce793f21cb5368ad7e-jnz2k is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Jan 30 13:12:06.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5690' Jan 30 13:12:07.161: INFO: stderr: "" Jan 30 13:12:07.161: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:12:07.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5690" for this suite. 
Jan 30 13:12:29.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:12:29.411: INFO: namespace kubectl-5690 deletion completed in 22.23297414s • [SLOW TEST:126.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:12:29.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 30 13:12:29.569: INFO: Waiting up to 5m0s for pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2" in namespace "emptydir-5537" to be "success or failure" Jan 30 13:12:29.576: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.350686ms Jan 30 13:12:31.586: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016762306s Jan 30 13:12:33.596: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027589037s Jan 30 13:12:35.608: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03882407s Jan 30 13:12:37.628: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059379371s Jan 30 13:12:39.641: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072037607s STEP: Saw pod success Jan 30 13:12:39.641: INFO: Pod "pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2" satisfied condition "success or failure" Jan 30 13:12:39.644: INFO: Trying to get logs from node iruya-node pod pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2 container test-container: STEP: delete the pod Jan 30 13:12:39.731: INFO: Waiting for pod pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2 to disappear Jan 30 13:12:39.749: INFO: Pod pod-e96c990e-64ab-4fd1-9508-3966d9bde6b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:12:39.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5537" for this suite. 
Jan 30 13:12:45.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:12:46.064: INFO: namespace emptydir-5537 deletion completed in 6.293894111s • [SLOW TEST:16.653 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:12:46.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jan 30 13:12:46.181: INFO: Waiting up to 5m0s for pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc" in namespace "containers-2102" to be "success or failure" Jan 30 13:12:46.208: INFO: Pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.551351ms Jan 30 13:12:48.224: INFO: Pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04312382s Jan 30 13:12:50.239: INFO: Pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057764591s Jan 30 13:12:52.245: INFO: Pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063679321s Jan 30 13:12:54.256: INFO: Pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075120915s STEP: Saw pod success Jan 30 13:12:54.257: INFO: Pod "client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc" satisfied condition "success or failure" Jan 30 13:12:54.262: INFO: Trying to get logs from node iruya-node pod client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc container test-container: STEP: delete the pod Jan 30 13:12:54.333: INFO: Waiting for pod client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc to disappear Jan 30 13:12:54.340: INFO: Pod client-containers-c1703c35-d8b3-41c6-a78a-7044dd031bdc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:12:54.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2102" for this suite. 
Jan 30 13:13:00.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:13:00.531: INFO: namespace containers-2102 deletion completed in 6.185090573s • [SLOW TEST:14.466 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:13:00.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 30 13:13:00.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89" in namespace "downward-api-2614" to be "success or failure" Jan 30 13:13:00.704: INFO: Pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.616843ms Jan 30 13:13:02.715: INFO: Pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037705438s Jan 30 13:13:04.721: INFO: Pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04368064s Jan 30 13:13:06.731: INFO: Pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053672285s Jan 30 13:13:08.748: INFO: Pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071011839s STEP: Saw pod success Jan 30 13:13:08.749: INFO: Pod "downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89" satisfied condition "success or failure" Jan 30 13:13:08.769: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89 container client-container: STEP: delete the pod Jan 30 13:13:09.027: INFO: Waiting for pod downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89 to disappear Jan 30 13:13:09.033: INFO: Pod downwardapi-volume-15f078b0-474b-465b-94d0-68055ff87c89 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:13:09.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2614" for this suite. 
Jan 30 13:13:15.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:13:15.228: INFO: namespace downward-api-2614 deletion completed in 6.189899575s • [SLOW TEST:14.697 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:13:15.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 30 13:13:15.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-911' Jan 30 13:13:15.726: INFO: stderr: "" Jan 30 13:13:15.726: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 30 13:13:16.740: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:16.740: INFO: Found 0 / 1
Jan 30 13:13:17.734: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:17.735: INFO: Found 0 / 1
Jan 30 13:13:18.739: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:18.739: INFO: Found 0 / 1
Jan 30 13:13:19.765: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:19.766: INFO: Found 0 / 1
Jan 30 13:13:20.735: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:20.735: INFO: Found 0 / 1
Jan 30 13:13:21.751: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:21.752: INFO: Found 0 / 1
Jan 30 13:13:22.766: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:22.766: INFO: Found 0 / 1
Jan 30 13:13:23.739: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:23.740: INFO: Found 1 / 1
Jan 30 13:13:23.740: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 30 13:13:23.745: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:23.746: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 30 13:13:23.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-f2qbq --namespace=kubectl-911 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 30 13:13:23.987: INFO: stderr: ""
Jan 30 13:13:23.987: INFO: stdout: "pod/redis-master-f2qbq patched\n"
STEP: checking annotations
Jan 30 13:13:23.997: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 13:13:23.997: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:13:23.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-911" for this suite.
Jan 30 13:13:46.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:13:46.227: INFO: namespace kubectl-911 deletion completed in 22.224177822s

• [SLOW TEST:30.999 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:13:46.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3794/configmap-test-bcef12b6-a1cd-4845-b640-3e9e60271b1b
STEP: Creating a pod to test consume configMaps
Jan 30 13:13:46.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604" in namespace "configmap-3794" to be "success or failure"
Jan 30 13:13:46.360: INFO: Pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.061093ms
Jan 30 13:13:48.369: INFO: Pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012566376s
Jan 30 13:13:50.378: INFO: Pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021789724s
Jan 30 13:13:52.387: INFO: Pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030670187s
Jan 30 13:13:54.402: INFO: Pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045386518s
STEP: Saw pod success
Jan 30 13:13:54.402: INFO: Pod "pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604" satisfied condition "success or failure"
Jan 30 13:13:54.406: INFO: Trying to get logs from node iruya-node pod pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604 container env-test:
STEP: delete the pod
Jan 30 13:13:54.483: INFO: Waiting for pod pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604 to disappear
Jan 30 13:13:54.488: INFO: Pod pod-configmaps-739461e4-b728-4a6e-93db-9ceab703e604 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:13:54.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3794" for this suite.
Jan 30 13:14:00.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:14:00.656: INFO: namespace configmap-3794 deletion completed in 6.163211878s

• [SLOW TEST:14.428 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:14:00.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ae535885-9289-4410-89ce-e8587a218af3 in namespace container-probe-276
Jan 30 13:14:10.816: INFO: Started pod liveness-ae535885-9289-4410-89ce-e8587a218af3 in namespace container-probe-276
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 13:14:10.830: INFO: Initial restart count of pod liveness-ae535885-9289-4410-89ce-e8587a218af3 is 0
Jan 30 13:14:30.954: INFO: Restart count of pod
container-probe-276/liveness-ae535885-9289-4410-89ce-e8587a218af3 is now 1 (20.123706916s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:14:30.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-276" for this suite.
Jan 30 13:14:37.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:14:37.174: INFO: namespace container-probe-276 deletion completed in 6.166139524s

• [SLOW TEST:36.517 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:14:37.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 30 13:14:37.236: INFO: Waiting up to 5m0s for pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec" in namespace "var-expansion-5913" to be
"success or failure"
Jan 30 13:14:37.248: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec": Phase="Pending", Reason="", readiness=false. Elapsed: 12.165431ms
Jan 30 13:14:39.259: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022628881s
Jan 30 13:14:41.276: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039579399s
Jan 30 13:14:43.283: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04646362s
Jan 30 13:14:45.293: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056615663s
Jan 30 13:14:47.302: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066281116s
STEP: Saw pod success
Jan 30 13:14:47.303: INFO: Pod "var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec" satisfied condition "success or failure"
Jan 30 13:14:47.307: INFO: Trying to get logs from node iruya-node pod var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec container dapi-container:
STEP: delete the pod
Jan 30 13:14:47.462: INFO: Waiting for pod var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec to disappear
Jan 30 13:14:47.477: INFO: Pod var-expansion-818e62cd-8325-4de1-a8b9-14db300d73ec no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:14:47.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5913" for this suite.
Jan 30 13:14:53.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:14:53.682: INFO: namespace var-expansion-5913 deletion completed in 6.192236943s

• [SLOW TEST:16.507 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:14:53.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 30 13:14:53.824: INFO: Waiting up to 5m0s for pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1" in namespace "var-expansion-7017" to be "success or failure"
Jan 30 13:14:53.831: INFO: Pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.736488ms
Jan 30 13:14:55.839: INFO: Pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.0146819s
Jan 30 13:14:57.851: INFO: Pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027312073s
Jan 30 13:14:59.879: INFO: Pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055354489s
Jan 30 13:15:01.893: INFO: Pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069644428s
STEP: Saw pod success
Jan 30 13:15:01.894: INFO: Pod "var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1" satisfied condition "success or failure"
Jan 30 13:15:01.899: INFO: Trying to get logs from node iruya-node pod var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1 container dapi-container:
STEP: delete the pod
Jan 30 13:15:01.973: INFO: Waiting for pod var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1 to disappear
Jan 30 13:15:01.987: INFO: Pod var-expansion-77d4e91b-26a5-4da2-add3-3f75af3767c1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:15:01.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7017" for this suite.
Jan 30 13:15:08.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:15:08.298: INFO: namespace var-expansion-7017 deletion completed in 6.278232333s

• [SLOW TEST:14.615 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:15:08.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 13:15:16.774: INFO: Waiting up to 5m0s for pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40" in namespace "pods-1386" to be "success or failure"
Jan 30 13:15:16.790: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40": Phase="Pending", Reason="", readiness=false. Elapsed: 15.100243ms
Jan 30 13:15:18.800: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.02540035s
Jan 30 13:15:20.819: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044545433s
Jan 30 13:15:22.845: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070153827s
Jan 30 13:15:24.862: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08715086s
Jan 30 13:15:26.875: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100410697s
STEP: Saw pod success
Jan 30 13:15:26.875: INFO: Pod "client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40" satisfied condition "success or failure"
Jan 30 13:15:26.882: INFO: Trying to get logs from node iruya-node pod client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40 container env3cont:
STEP: delete the pod
Jan 30 13:15:27.255: INFO: Waiting for pod client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40 to disappear
Jan 30 13:15:27.268: INFO: Pod client-envvars-94b56a53-97cb-4a22-b7d0-8b70377daa40 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:15:27.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1386" for this suite.
Jan 30 13:16:09.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:16:09.481: INFO: namespace pods-1386 deletion completed in 42.200439187s

• [SLOW TEST:61.182 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:16:09.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 13:16:09.566: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 30 13:16:09.621: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 30 13:16:14.629: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 13:16:18.643: INFO: Creating deployment "test-rolling-update-deployment"
Jan 30 13:16:18.655: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 30 13:16:18.668: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 30 13:16:20.769: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 30 13:16:20.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 13:16:22.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978,
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 13:16:24.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 13:16:26.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715986978, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated",
Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 13:16:28.794: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 30 13:16:28.813: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2768,SelfLink:/apis/apps/v1/namespaces/deployment-2768/deployments/test-rolling-update-deployment,UID:aca415a5-f12e-454c-ad55-a2ab5033137a,ResourceVersion:22438104,Generation:1,CreationTimestamp:2020-01-30 13:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-30 13:16:18 +0000 UTC 2020-01-30 13:16:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-30 13:16:26 +0000 UTC 2020-01-30 13:16:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Jan 30 13:16:28.815: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2768,SelfLink:/apis/apps/v1/namespaces/deployment-2768/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:cdbb1668-f6bd-46d4-9f08-6908b911e39a,ResourceVersion:22438093,Generation:1,CreationTimestamp:2020-01-30 13:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment aca415a5-f12e-454c-ad55-a2ab5033137a 0xc0025eceb7 0xc0025eceb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 30 13:16:28.815: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 30 13:16:28.816: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2768,SelfLink:/apis/apps/v1/namespaces/deployment-2768/replicasets/test-rolling-update-controller,UID:6a3fd94a-39f0-44d3-b43e-5a2e7107b617,ResourceVersion:22438102,Generation:2,CreationTimestamp:2020-01-30 13:16:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment aca415a5-f12e-454c-ad55-a2ab5033137a 0xc0025ecdcf 0xc0025ecde0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod:
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 30 13:16:28.819: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-nfptj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-nfptj,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2768,SelfLink:/api/v1/namespaces/deployment-2768/pods/test-rolling-update-deployment-79f6b9d75c-nfptj,UID:80b39bde-a356-4d40-92f8-03a115f290d2,ResourceVersion:22438092,Generation:0,CreationTimestamp:2020-01-30 13:16:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c cdbb1668-f6bd-46d4-9f08-6908b911e39a 0xc0025ed7a7 0xc0025ed7a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-scwcw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-scwcw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-scwcw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ed820} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ed840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:16:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:16:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:16:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:16:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-30 13:16:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-30 13:16:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d79c2f722ccf2aaaa7fb06a37e776633e7afe74c1bef40112fab5e95006dc236}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:16:28.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-2768" for this suite.
Jan 30 13:16:34.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:16:34.948: INFO: namespace deployment-2768 deletion completed in 6.124136793s

• [SLOW TEST:25.467 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:16:34.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 30 13:16:35.115: INFO: Waiting up to 5m0s for pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0" in namespace "emptydir-2612" to be "success or failure"
Jan 30 13:16:35.199: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 83.740336ms
Jan 30 13:16:37.246: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130891387s
Jan 30 13:16:39.254: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139043699s
Jan 30 13:16:41.302: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186982006s
Jan 30 13:16:43.312: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196557472s
Jan 30 13:16:45.319: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203961658s
Jan 30 13:16:47.379: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.264232114s
STEP: Saw pod success
Jan 30 13:16:47.380: INFO: Pod "pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0" satisfied condition "success or failure"
Jan 30 13:16:47.393: INFO: Trying to get logs from node iruya-node pod pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0 container test-container:
STEP: delete the pod
Jan 30 13:16:47.446: INFO: Waiting for pod pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0 to disappear
Jan 30 13:16:47.460: INFO: Pod pod-4a2a7080-0f02-4f62-9c16-0dcd53864ed0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:16:47.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2612" for this suite.
Jan 30 13:16:53.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:16:53.733: INFO: namespace emptydir-2612 deletion completed in 6.266200248s

• [SLOW TEST:18.784 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:16:53.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1ee55324-2c80-4bf4-b59c-9cbdbd00d68e
STEP: Creating a pod to test consume secrets
Jan 30 13:16:54.024: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef" in namespace "projected-46" to be "success or failure"
Jan 30 13:16:54.032: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef": Phase="Pending", Reason="", readiness=false. Elapsed: 7.99537ms
Jan 30 13:16:56.044: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019530483s
Jan 30 13:16:58.051: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026318081s
Jan 30 13:17:00.063: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038355206s
Jan 30 13:17:02.075: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050822128s
Jan 30 13:17:04.087: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062977224s
STEP: Saw pod success
Jan 30 13:17:04.088: INFO: Pod "pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef" satisfied condition "success or failure"
Jan 30 13:17:04.093: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef container projected-secret-volume-test:
STEP: delete the pod
Jan 30 13:17:04.639: INFO: Waiting for pod pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef to disappear
Jan 30 13:17:04.656: INFO: Pod pod-projected-secrets-4ff13fe9-3d80-482e-ac0a-456a55beaeef no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:17:04.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-46" for this suite.
Jan 30 13:17:10.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:17:10.805: INFO: namespace projected-46 deletion completed in 6.139004042s

• [SLOW TEST:17.072 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:17:10.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d052734d-0c15-49e8-a211-5c7d7eb4aa9b
STEP: Creating a pod to test consume configMaps
Jan 30 13:17:10.972: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8" in namespace "projected-3195" to be "success or failure"
Jan 30 13:17:10.975: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.691762ms
Jan 30 13:17:12.980: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007947248s
Jan 30 13:17:14.995: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022545151s
Jan 30 13:17:17.010: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037905381s
Jan 30 13:17:19.022: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8": Phase="Running", Reason="", readiness=true. Elapsed: 8.049800801s
Jan 30 13:17:21.040: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068102277s
STEP: Saw pod success
Jan 30 13:17:21.041: INFO: Pod "pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8" satisfied condition "success or failure"
Jan 30 13:17:21.070: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8 container projected-configmap-volume-test:
STEP: delete the pod
Jan 30 13:17:21.127: INFO: Waiting for pod pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8 to disappear
Jan 30 13:17:21.133: INFO: Pod pod-projected-configmaps-7734d29e-694d-4f53-85ae-62c053c7abe8 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:17:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3195" for this suite.
Jan 30 13:17:27.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:17:27.278: INFO: namespace projected-3195 deletion completed in 6.139479107s

• [SLOW TEST:16.472 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:17:27.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 30 13:17:35.449: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 30 13:17:45.611: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:17:45.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9568" for this suite.
Jan 30 13:17:51.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:17:51.805: INFO: namespace pods-9568 deletion completed in 6.178271962s

• [SLOW TEST:24.527 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:17:51.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 13:17:51.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a" in namespace "projected-9842" to be "success or failure"
Jan 30 13:17:51.957: INFO: Pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.410762ms
Jan 30 13:17:53.970: INFO: Pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026627671s
Jan 30 13:17:55.979: INFO: Pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035409761s
Jan 30 13:17:57.987: INFO: Pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04322886s
Jan 30 13:17:59.998: INFO: Pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054355066s
STEP: Saw pod success
Jan 30 13:17:59.998: INFO: Pod "downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a" satisfied condition "success or failure"
Jan 30 13:18:00.006: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a container client-container:
STEP: delete the pod
Jan 30 13:18:00.081: INFO: Waiting for pod downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a to disappear
Jan 30 13:18:00.088: INFO: Pod downwardapi-volume-81b14ab8-6615-45c1-8db5-546fe645262a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:18:00.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9842" for this suite.
Jan 30 13:18:06.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:18:06.330: INFO: namespace projected-9842 deletion completed in 6.234658415s • [SLOW TEST:14.525 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:18:06.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-2173 I0130 13:18:06.443495 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2173, replica count: 1 I0130 13:18:07.495261 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:08.496084 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:09.496777 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:10.498254 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:11.499790 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:12.500717 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:13.501530 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:14.502403 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:15.503359 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 13:18:16.504135 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 30 13:18:16.763: INFO: Created: latency-svc-7wxwk Jan 30 13:18:16.779: INFO: Got endpoints: latency-svc-7wxwk [173.390701ms] Jan 30 13:18:16.874: INFO: Created: latency-svc-v4pj4 Jan 30 13:18:16.960: INFO: Got endpoints: latency-svc-v4pj4 [179.834834ms] Jan 30 13:18:17.024: INFO: Created: latency-svc-bmzmv Jan 30 13:18:17.044: INFO: Got endpoints: latency-svc-bmzmv [265.29827ms] Jan 30 13:18:17.171: INFO: Created: latency-svc-9xxqr Jan 30 13:18:17.191: INFO: Got endpoints: latency-svc-9xxqr [410.914471ms] Jan 30 13:18:17.223: INFO: Created: latency-svc-d278q Jan 30 13:18:17.238: INFO: Got endpoints: latency-svc-d278q [458.820441ms] Jan 30 13:18:17.272: INFO: Created: latency-svc-hj97h Jan 30 13:18:17.358: INFO: 
Created: latency-svc-mrd79 Jan 30 13:18:17.370: INFO: Got endpoints: latency-svc-hj97h [590.351852ms] Jan 30 13:18:17.375: INFO: Got endpoints: latency-svc-mrd79 [595.740045ms] Jan 30 13:18:17.410: INFO: Created: latency-svc-qnswt Jan 30 13:18:17.410: INFO: Got endpoints: latency-svc-qnswt [630.913361ms] Jan 30 13:18:17.528: INFO: Created: latency-svc-fdxkr Jan 30 13:18:17.559: INFO: Got endpoints: latency-svc-fdxkr [778.6639ms] Jan 30 13:18:17.565: INFO: Created: latency-svc-txtbs Jan 30 13:18:17.573: INFO: Got endpoints: latency-svc-txtbs [793.749281ms] Jan 30 13:18:17.627: INFO: Created: latency-svc-ckrjd Jan 30 13:18:17.767: INFO: Got endpoints: latency-svc-ckrjd [986.732849ms] Jan 30 13:18:17.767: INFO: Created: latency-svc-6snw6 Jan 30 13:18:17.772: INFO: Got endpoints: latency-svc-6snw6 [991.561317ms] Jan 30 13:18:17.825: INFO: Created: latency-svc-ftqpd Jan 30 13:18:17.825: INFO: Got endpoints: latency-svc-ftqpd [1.045855935s] Jan 30 13:18:17.928: INFO: Created: latency-svc-5nk4l Jan 30 13:18:17.929: INFO: Got endpoints: latency-svc-5nk4l [1.148688783s] Jan 30 13:18:17.968: INFO: Created: latency-svc-wlp95 Jan 30 13:18:17.977: INFO: Got endpoints: latency-svc-wlp95 [1.196614188s] Jan 30 13:18:18.102: INFO: Created: latency-svc-b74tz Jan 30 13:18:18.110: INFO: Got endpoints: latency-svc-b74tz [1.330060847s] Jan 30 13:18:18.168: INFO: Created: latency-svc-fjfb6 Jan 30 13:18:18.179: INFO: Got endpoints: latency-svc-fjfb6 [1.218895069s] Jan 30 13:18:18.300: INFO: Created: latency-svc-jmtc2 Jan 30 13:18:18.319: INFO: Got endpoints: latency-svc-jmtc2 [1.274068257s] Jan 30 13:18:18.409: INFO: Created: latency-svc-wztfv Jan 30 13:18:18.499: INFO: Got endpoints: latency-svc-wztfv [1.307825293s] Jan 30 13:18:18.553: INFO: Created: latency-svc-c2jb9 Jan 30 13:18:18.573: INFO: Got endpoints: latency-svc-c2jb9 [1.333947598s] Jan 30 13:18:18.679: INFO: Created: latency-svc-xhhgx Jan 30 13:18:18.687: INFO: Got endpoints: latency-svc-xhhgx [1.316711474s] Jan 30 
13:18:18.751: INFO: Created: latency-svc-wz9w5 Jan 30 13:18:18.887: INFO: Got endpoints: latency-svc-wz9w5 [1.511501254s] Jan 30 13:18:18.920: INFO: Created: latency-svc-6d7d7 Jan 30 13:18:18.925: INFO: Got endpoints: latency-svc-6d7d7 [1.514445373s] Jan 30 13:18:19.123: INFO: Created: latency-svc-jkb54 Jan 30 13:18:19.141: INFO: Got endpoints: latency-svc-jkb54 [1.5808364s] Jan 30 13:18:19.182: INFO: Created: latency-svc-pjgqc Jan 30 13:18:19.190: INFO: Got endpoints: latency-svc-pjgqc [1.617434864s] Jan 30 13:18:19.306: INFO: Created: latency-svc-crsfm Jan 30 13:18:19.318: INFO: Got endpoints: latency-svc-crsfm [1.550560924s] Jan 30 13:18:19.373: INFO: Created: latency-svc-kx6vv Jan 30 13:18:19.373: INFO: Got endpoints: latency-svc-kx6vv [1.601623707s] Jan 30 13:18:19.473: INFO: Created: latency-svc-95trk Jan 30 13:18:19.504: INFO: Got endpoints: latency-svc-95trk [1.678869115s] Jan 30 13:18:19.511: INFO: Created: latency-svc-j8z6s Jan 30 13:18:19.514: INFO: Got endpoints: latency-svc-j8z6s [1.585018851s] Jan 30 13:18:19.545: INFO: Created: latency-svc-zcrw5 Jan 30 13:18:19.554: INFO: Got endpoints: latency-svc-zcrw5 [1.576855588s] Jan 30 13:18:19.663: INFO: Created: latency-svc-575kb Jan 30 13:18:19.700: INFO: Got endpoints: latency-svc-575kb [1.589620622s] Jan 30 13:18:19.704: INFO: Created: latency-svc-2dxhp Jan 30 13:18:19.720: INFO: Got endpoints: latency-svc-2dxhp [1.540144424s] Jan 30 13:18:19.831: INFO: Created: latency-svc-8f2rt Jan 30 13:18:19.833: INFO: Got endpoints: latency-svc-8f2rt [1.513966757s] Jan 30 13:18:19.978: INFO: Created: latency-svc-ppwws Jan 30 13:18:19.983: INFO: Got endpoints: latency-svc-ppwws [1.483119805s] Jan 30 13:18:20.042: INFO: Created: latency-svc-qpslv Jan 30 13:18:20.045: INFO: Got endpoints: latency-svc-qpslv [1.471946448s] Jan 30 13:18:20.158: INFO: Created: latency-svc-9zgch Jan 30 13:18:20.161: INFO: Got endpoints: latency-svc-9zgch [1.473571019s] Jan 30 13:18:20.186: INFO: Created: latency-svc-d6fvc Jan 30 
13:18:20.197: INFO: Got endpoints: latency-svc-d6fvc [1.309789009s] Jan 30 13:18:20.226: INFO: Created: latency-svc-tqklb Jan 30 13:18:20.308: INFO: Got endpoints: latency-svc-tqklb [1.383519525s] Jan 30 13:18:20.347: INFO: Created: latency-svc-88xsc Jan 30 13:18:20.358: INFO: Got endpoints: latency-svc-88xsc [1.216637648s] Jan 30 13:18:20.529: INFO: Created: latency-svc-m8j2s Jan 30 13:18:20.552: INFO: Got endpoints: latency-svc-m8j2s [1.360973196s] Jan 30 13:18:20.603: INFO: Created: latency-svc-pftp5 Jan 30 13:18:20.614: INFO: Got endpoints: latency-svc-pftp5 [1.2961629s] Jan 30 13:18:20.713: INFO: Created: latency-svc-2nzb7 Jan 30 13:18:20.732: INFO: Got endpoints: latency-svc-2nzb7 [1.358312321s] Jan 30 13:18:20.781: INFO: Created: latency-svc-l7w6x Jan 30 13:18:20.880: INFO: Created: latency-svc-k8pmd Jan 30 13:18:20.880: INFO: Got endpoints: latency-svc-l7w6x [1.376004163s] Jan 30 13:18:20.932: INFO: Got endpoints: latency-svc-k8pmd [1.417384857s] Jan 30 13:18:20.935: INFO: Created: latency-svc-m85fx Jan 30 13:18:20.951: INFO: Got endpoints: latency-svc-m85fx [1.39591749s] Jan 30 13:18:21.048: INFO: Created: latency-svc-69zk7 Jan 30 13:18:21.096: INFO: Got endpoints: latency-svc-69zk7 [1.394965151s] Jan 30 13:18:21.103: INFO: Created: latency-svc-zchdb Jan 30 13:18:21.110: INFO: Got endpoints: latency-svc-zchdb [1.390330562s] Jan 30 13:18:21.144: INFO: Created: latency-svc-8sxx2 Jan 30 13:18:21.242: INFO: Got endpoints: latency-svc-8sxx2 [1.408819231s] Jan 30 13:18:21.278: INFO: Created: latency-svc-hzs7c Jan 30 13:18:21.297: INFO: Got endpoints: latency-svc-hzs7c [1.31353536s] Jan 30 13:18:21.456: INFO: Created: latency-svc-94sp4 Jan 30 13:18:21.459: INFO: Got endpoints: latency-svc-94sp4 [215.87923ms] Jan 30 13:18:21.493: INFO: Created: latency-svc-8lvz6 Jan 30 13:18:21.511: INFO: Got endpoints: latency-svc-8lvz6 [1.465007393s] Jan 30 13:18:21.633: INFO: Created: latency-svc-24v2s Jan 30 13:18:21.633: INFO: Got endpoints: latency-svc-24v2s [1.471620738s] 
Jan 30 13:18:21.694: INFO: Created: latency-svc-fvssm Jan 30 13:18:21.705: INFO: Got endpoints: latency-svc-fvssm [1.507539526s] Jan 30 13:18:21.903: INFO: Created: latency-svc-nw9wq Jan 30 13:18:21.925: INFO: Got endpoints: latency-svc-nw9wq [1.616038189s] Jan 30 13:18:22.097: INFO: Created: latency-svc-mnftf Jan 30 13:18:22.107: INFO: Got endpoints: latency-svc-mnftf [1.748676841s] Jan 30 13:18:22.292: INFO: Created: latency-svc-qsq4j Jan 30 13:18:22.307: INFO: Got endpoints: latency-svc-qsq4j [1.755209987s] Jan 30 13:18:22.501: INFO: Created: latency-svc-8ngvr Jan 30 13:18:22.528: INFO: Got endpoints: latency-svc-8ngvr [1.912332975s] Jan 30 13:18:22.700: INFO: Created: latency-svc-s55n8 Jan 30 13:18:22.708: INFO: Got endpoints: latency-svc-s55n8 [1.97572275s] Jan 30 13:18:22.763: INFO: Created: latency-svc-fd8hw Jan 30 13:18:22.785: INFO: Got endpoints: latency-svc-fd8hw [1.904661914s] Jan 30 13:18:22.873: INFO: Created: latency-svc-ztddp Jan 30 13:18:22.884: INFO: Got endpoints: latency-svc-ztddp [1.952471818s] Jan 30 13:18:22.922: INFO: Created: latency-svc-sflk4 Jan 30 13:18:22.930: INFO: Got endpoints: latency-svc-sflk4 [1.979533819s] Jan 30 13:18:22.955: INFO: Created: latency-svc-6q528 Jan 30 13:18:23.093: INFO: Got endpoints: latency-svc-6q528 [1.996149687s] Jan 30 13:18:23.133: INFO: Created: latency-svc-bzlx4 Jan 30 13:18:23.158: INFO: Got endpoints: latency-svc-bzlx4 [2.047682797s] Jan 30 13:18:23.178: INFO: Created: latency-svc-whgln Jan 30 13:18:23.297: INFO: Got endpoints: latency-svc-whgln [2.000327484s] Jan 30 13:18:23.304: INFO: Created: latency-svc-9jf8g Jan 30 13:18:23.315: INFO: Got endpoints: latency-svc-9jf8g [1.856362694s] Jan 30 13:18:23.501: INFO: Created: latency-svc-2jsn8 Jan 30 13:18:23.513: INFO: Got endpoints: latency-svc-2jsn8 [2.001498068s] Jan 30 13:18:23.646: INFO: Created: latency-svc-r2bf2 Jan 30 13:18:23.652: INFO: Got endpoints: latency-svc-r2bf2 [2.019654365s] Jan 30 13:18:23.702: INFO: Created: latency-svc-8wq27 Jan 30 
13:18:23.723: INFO: Got endpoints: latency-svc-8wq27 [2.017351116s] Jan 30 13:18:23.810: INFO: Created: latency-svc-87xnh Jan 30 13:18:23.828: INFO: Got endpoints: latency-svc-87xnh [1.90224468s] Jan 30 13:18:23.881: INFO: Created: latency-svc-4bn2g Jan 30 13:18:23.893: INFO: Got endpoints: latency-svc-4bn2g [1.785829574s] Jan 30 13:18:24.039: INFO: Created: latency-svc-hn2bh Jan 30 13:18:24.045: INFO: Got endpoints: latency-svc-hn2bh [1.737007338s] Jan 30 13:18:24.080: INFO: Created: latency-svc-6gl85 Jan 30 13:18:24.103: INFO: Got endpoints: latency-svc-6gl85 [1.575217709s] Jan 30 13:18:24.193: INFO: Created: latency-svc-rlg54 Jan 30 13:18:24.201: INFO: Got endpoints: latency-svc-rlg54 [1.491746345s] Jan 30 13:18:24.233: INFO: Created: latency-svc-ggtks Jan 30 13:18:24.249: INFO: Got endpoints: latency-svc-ggtks [1.463838202s] Jan 30 13:18:24.332: INFO: Created: latency-svc-qkdsq Jan 30 13:18:24.340: INFO: Got endpoints: latency-svc-qkdsq [1.455332981s] Jan 30 13:18:24.388: INFO: Created: latency-svc-2tt46 Jan 30 13:18:24.400: INFO: Got endpoints: latency-svc-2tt46 [1.469154504s] Jan 30 13:18:24.565: INFO: Created: latency-svc-4gfqz Jan 30 13:18:24.577: INFO: Got endpoints: latency-svc-4gfqz [1.483528645s] Jan 30 13:18:24.625: INFO: Created: latency-svc-jttl6 Jan 30 13:18:24.647: INFO: Got endpoints: latency-svc-jttl6 [1.488756774s] Jan 30 13:18:24.770: INFO: Created: latency-svc-tt45n Jan 30 13:18:24.782: INFO: Got endpoints: latency-svc-tt45n [1.484635596s] Jan 30 13:18:24.832: INFO: Created: latency-svc-gvskt Jan 30 13:18:24.834: INFO: Got endpoints: latency-svc-gvskt [1.519054842s] Jan 30 13:18:24.863: INFO: Created: latency-svc-krhmr Jan 30 13:18:25.004: INFO: Got endpoints: latency-svc-krhmr [1.490754394s] Jan 30 13:18:25.010: INFO: Created: latency-svc-f69gg Jan 30 13:18:25.020: INFO: Got endpoints: latency-svc-f69gg [1.367737107s] Jan 30 13:18:25.078: INFO: Created: latency-svc-5zsk8 Jan 30 13:18:25.085: INFO: Got endpoints: latency-svc-5zsk8 
[1.361928975s] Jan 30 13:18:25.219: INFO: Created: latency-svc-p9mv7 Jan 30 13:18:25.247: INFO: Got endpoints: latency-svc-p9mv7 [1.418885515s] Jan 30 13:18:25.251: INFO: Created: latency-svc-26fwm Jan 30 13:18:25.260: INFO: Got endpoints: latency-svc-26fwm [1.365741885s] Jan 30 13:18:25.436: INFO: Created: latency-svc-v8h2h Jan 30 13:18:25.452: INFO: Created: latency-svc-qkmdv Jan 30 13:18:25.452: INFO: Got endpoints: latency-svc-v8h2h [1.406245576s] Jan 30 13:18:25.460: INFO: Got endpoints: latency-svc-qkmdv [1.356553323s] Jan 30 13:18:25.532: INFO: Created: latency-svc-gtdtq Jan 30 13:18:25.651: INFO: Got endpoints: latency-svc-gtdtq [1.449887918s] Jan 30 13:18:25.707: INFO: Created: latency-svc-f6kn7 Jan 30 13:18:25.710: INFO: Got endpoints: latency-svc-f6kn7 [1.460792379s] Jan 30 13:18:25.840: INFO: Created: latency-svc-8jw76 Jan 30 13:18:25.855: INFO: Got endpoints: latency-svc-8jw76 [1.514950705s] Jan 30 13:18:25.931: INFO: Created: latency-svc-hp95w Jan 30 13:18:26.042: INFO: Got endpoints: latency-svc-hp95w [1.641198414s] Jan 30 13:18:26.075: INFO: Created: latency-svc-fwvq2 Jan 30 13:18:26.075: INFO: Got endpoints: latency-svc-fwvq2 [1.497477448s] Jan 30 13:18:26.106: INFO: Created: latency-svc-r2phq Jan 30 13:18:26.116: INFO: Got endpoints: latency-svc-r2phq [1.468419861s] Jan 30 13:18:26.220: INFO: Created: latency-svc-72dsj Jan 30 13:18:26.224: INFO: Got endpoints: latency-svc-72dsj [1.441643166s] Jan 30 13:18:26.274: INFO: Created: latency-svc-wq5gh Jan 30 13:18:26.286: INFO: Got endpoints: latency-svc-wq5gh [1.451378743s] Jan 30 13:18:26.329: INFO: Created: latency-svc-t2jb9 Jan 30 13:18:26.346: INFO: Got endpoints: latency-svc-t2jb9 [1.341016678s] Jan 30 13:18:26.478: INFO: Created: latency-svc-f54qv Jan 30 13:18:26.491: INFO: Got endpoints: latency-svc-f54qv [1.470024477s] Jan 30 13:18:26.592: INFO: Created: latency-svc-jsmsz Jan 30 13:18:26.670: INFO: Got endpoints: latency-svc-jsmsz [1.584686826s] Jan 30 13:18:26.693: INFO: Created: 
latency-svc-2z7wg Jan 30 13:18:26.740: INFO: Got endpoints: latency-svc-2z7wg [1.492717099s] Jan 30 13:18:26.746: INFO: Created: latency-svc-pdfwt Jan 30 13:18:26.768: INFO: Got endpoints: latency-svc-pdfwt [1.507306925s] Jan 30 13:18:26.942: INFO: Created: latency-svc-9t5kp Jan 30 13:18:26.944: INFO: Got endpoints: latency-svc-9t5kp [1.492060658s] Jan 30 13:18:26.983: INFO: Created: latency-svc-8zxss Jan 30 13:18:27.002: INFO: Got endpoints: latency-svc-8zxss [1.541337796s] Jan 30 13:18:27.133: INFO: Created: latency-svc-p7fvh Jan 30 13:18:27.166: INFO: Got endpoints: latency-svc-p7fvh [1.514736475s] Jan 30 13:18:27.171: INFO: Created: latency-svc-pv4vd Jan 30 13:18:27.179: INFO: Got endpoints: latency-svc-pv4vd [1.468154801s] Jan 30 13:18:27.324: INFO: Created: latency-svc-s77x5 Jan 30 13:18:27.334: INFO: Got endpoints: latency-svc-s77x5 [1.478502919s] Jan 30 13:18:27.374: INFO: Created: latency-svc-bm4rt Jan 30 13:18:27.379: INFO: Got endpoints: latency-svc-bm4rt [1.337614585s] Jan 30 13:18:27.549: INFO: Created: latency-svc-t4fds Jan 30 13:18:27.549: INFO: Got endpoints: latency-svc-t4fds [1.474301691s] Jan 30 13:18:27.600: INFO: Created: latency-svc-cchmp Jan 30 13:18:27.611: INFO: Got endpoints: latency-svc-cchmp [1.494592456s] Jan 30 13:18:27.735: INFO: Created: latency-svc-8d46t Jan 30 13:18:27.755: INFO: Got endpoints: latency-svc-8d46t [1.530332844s] Jan 30 13:18:27.791: INFO: Created: latency-svc-mlwvc Jan 30 13:18:27.798: INFO: Got endpoints: latency-svc-mlwvc [1.511772931s] Jan 30 13:18:27.940: INFO: Created: latency-svc-l879j Jan 30 13:18:27.981: INFO: Got endpoints: latency-svc-l879j [1.635392047s] Jan 30 13:18:27.986: INFO: Created: latency-svc-hg7n6 Jan 30 13:18:27.991: INFO: Got endpoints: latency-svc-hg7n6 [1.499524697s] Jan 30 13:18:28.040: INFO: Created: latency-svc-xfv7w Jan 30 13:18:28.194: INFO: Got endpoints: latency-svc-xfv7w [1.523649096s] Jan 30 13:18:28.216: INFO: Created: latency-svc-gzzbp Jan 30 13:18:28.233: INFO: Got endpoints: 
latency-svc-gzzbp [1.491980867s] Jan 30 13:18:28.293: INFO: Created: latency-svc-258f8 Jan 30 13:18:28.397: INFO: Got endpoints: latency-svc-258f8 [1.628578027s] Jan 30 13:18:28.404: INFO: Created: latency-svc-t9hwx Jan 30 13:18:28.415: INFO: Got endpoints: latency-svc-t9hwx [1.470452896s] Jan 30 13:18:28.477: INFO: Created: latency-svc-w7vl7 Jan 30 13:18:28.477: INFO: Got endpoints: latency-svc-w7vl7 [1.474958037s] Jan 30 13:18:28.612: INFO: Created: latency-svc-f55dj Jan 30 13:18:28.627: INFO: Got endpoints: latency-svc-f55dj [1.45953063s] Jan 30 13:18:28.661: INFO: Created: latency-svc-v87bl Jan 30 13:18:28.671: INFO: Got endpoints: latency-svc-v87bl [1.491910609s] Jan 30 13:18:28.711: INFO: Created: latency-svc-9gt6s Jan 30 13:18:28.838: INFO: Got endpoints: latency-svc-9gt6s [1.503206065s] Jan 30 13:18:28.848: INFO: Created: latency-svc-9g8hm Jan 30 13:18:28.921: INFO: Got endpoints: latency-svc-9g8hm [1.541736383s] Jan 30 13:18:28.932: INFO: Created: latency-svc-vgxkv Jan 30 13:18:28.947: INFO: Got endpoints: latency-svc-vgxkv [1.397089909s] Jan 30 13:18:29.084: INFO: Created: latency-svc-b6mmt Jan 30 13:18:29.152: INFO: Got endpoints: latency-svc-b6mmt [1.54045674s] Jan 30 13:18:29.313: INFO: Created: latency-svc-qb6mg Jan 30 13:18:29.324: INFO: Got endpoints: latency-svc-qb6mg [1.569579785s] Jan 30 13:18:29.378: INFO: Created: latency-svc-8rbw6 Jan 30 13:18:29.393: INFO: Got endpoints: latency-svc-8rbw6 [1.594858425s] Jan 30 13:18:29.549: INFO: Created: latency-svc-qvj5k Jan 30 13:18:29.561: INFO: Got endpoints: latency-svc-qvj5k [1.578693806s] Jan 30 13:18:29.595: INFO: Created: latency-svc-fpx2w Jan 30 13:18:29.604: INFO: Got endpoints: latency-svc-fpx2w [1.612859015s] Jan 30 13:18:29.724: INFO: Created: latency-svc-r8ktz Jan 30 13:18:29.737: INFO: Got endpoints: latency-svc-r8ktz [1.543160458s] Jan 30 13:18:29.815: INFO: Created: latency-svc-44psx Jan 30 13:18:29.816: INFO: Got endpoints: latency-svc-44psx [1.582145462s] Jan 30 13:18:29.954: INFO: 
Created: latency-svc-vbch5 Jan 30 13:18:29.987: INFO: Got endpoints: latency-svc-vbch5 [1.589732726s] Jan 30 13:18:30.052: INFO: Created: latency-svc-prd9j Jan 30 13:18:30.136: INFO: Got endpoints: latency-svc-prd9j [1.720932555s] Jan 30 13:18:30.172: INFO: Created: latency-svc-87krg Jan 30 13:18:30.179: INFO: Got endpoints: latency-svc-87krg [1.701146048s] Jan 30 13:18:30.205: INFO: Created: latency-svc-9784p Jan 30 13:18:30.210: INFO: Got endpoints: latency-svc-9784p [1.583290072s] Jan 30 13:18:30.307: INFO: Created: latency-svc-7x8tl Jan 30 13:18:30.345: INFO: Got endpoints: latency-svc-7x8tl [1.67403597s] Jan 30 13:18:30.376: INFO: Created: latency-svc-njhv8 Jan 30 13:18:30.379: INFO: Got endpoints: latency-svc-njhv8 [1.540718254s] Jan 30 13:18:30.646: INFO: Created: latency-svc-bdrpd Jan 30 13:18:30.652: INFO: Got endpoints: latency-svc-bdrpd [1.729783129s] Jan 30 13:18:30.704: INFO: Created: latency-svc-59xbs Jan 30 13:18:30.817: INFO: Got endpoints: latency-svc-59xbs [1.870198324s] Jan 30 13:18:30.840: INFO: Created: latency-svc-vm97v Jan 30 13:18:30.841: INFO: Got endpoints: latency-svc-vm97v [1.688510751s] Jan 30 13:18:31.004: INFO: Created: latency-svc-v7kzz Jan 30 13:18:31.012: INFO: Got endpoints: latency-svc-v7kzz [1.687386875s] Jan 30 13:18:31.057: INFO: Created: latency-svc-mjbfk Jan 30 13:18:31.086: INFO: Got endpoints: latency-svc-mjbfk [1.691723885s] Jan 30 13:18:31.197: INFO: Created: latency-svc-nftdc Jan 30 13:18:31.242: INFO: Got endpoints: latency-svc-nftdc [1.681175176s] Jan 30 13:18:31.244: INFO: Created: latency-svc-pndgd Jan 30 13:18:31.253: INFO: Got endpoints: latency-svc-pndgd [1.648623341s] Jan 30 13:18:31.419: INFO: Created: latency-svc-fmcpd Jan 30 13:18:31.457: INFO: Got endpoints: latency-svc-fmcpd [1.719361827s] Jan 30 13:18:31.630: INFO: Created: latency-svc-7v4wz Jan 30 13:18:31.646: INFO: Got endpoints: latency-svc-7v4wz [1.830036739s] Jan 30 13:18:31.815: INFO: Created: latency-svc-q8bp6 Jan 30 13:18:31.824: INFO: Got 
endpoints: latency-svc-q8bp6 [1.836190139s] Jan 30 13:18:32.030: INFO: Created: latency-svc-4m2hr Jan 30 13:18:32.060: INFO: Got endpoints: latency-svc-4m2hr [1.922948473s] Jan 30 13:18:32.065: INFO: Created: latency-svc-ccpt4 Jan 30 13:18:32.073: INFO: Got endpoints: latency-svc-ccpt4 [1.894628614s] Jan 30 13:18:32.213: INFO: Created: latency-svc-qklfk Jan 30 13:18:32.223: INFO: Got endpoints: latency-svc-qklfk [2.012581234s] Jan 30 13:18:32.261: INFO: Created: latency-svc-p959f Jan 30 13:18:32.294: INFO: Got endpoints: latency-svc-p959f [1.948265619s] Jan 30 13:18:32.368: INFO: Created: latency-svc-tkf4l Jan 30 13:18:32.383: INFO: Got endpoints: latency-svc-tkf4l [2.003709323s] Jan 30 13:18:32.419: INFO: Created: latency-svc-67mjs Jan 30 13:18:32.426: INFO: Got endpoints: latency-svc-67mjs [1.773873815s] Jan 30 13:18:32.554: INFO: Created: latency-svc-dbwq8 Jan 30 13:18:32.572: INFO: Created: latency-svc-mzhpz Jan 30 13:18:32.581: INFO: Got endpoints: latency-svc-dbwq8 [1.762718228s] Jan 30 13:18:32.587: INFO: Got endpoints: latency-svc-mzhpz [1.745527251s] Jan 30 13:18:32.621: INFO: Created: latency-svc-sf2fp Jan 30 13:18:32.624: INFO: Got endpoints: latency-svc-sf2fp [1.611559592s] Jan 30 13:18:32.719: INFO: Created: latency-svc-9npcm Jan 30 13:18:32.722: INFO: Got endpoints: latency-svc-9npcm [1.636120509s] Jan 30 13:18:32.768: INFO: Created: latency-svc-n62dw Jan 30 13:18:32.773: INFO: Got endpoints: latency-svc-n62dw [1.529995733s] Jan 30 13:18:32.905: INFO: Created: latency-svc-fwv5g Jan 30 13:18:32.931: INFO: Got endpoints: latency-svc-fwv5g [1.67850519s] Jan 30 13:18:32.972: INFO: Created: latency-svc-449lq Jan 30 13:18:33.112: INFO: Got endpoints: latency-svc-449lq [1.654586389s] Jan 30 13:18:33.127: INFO: Created: latency-svc-grknv Jan 30 13:18:33.135: INFO: Got endpoints: latency-svc-grknv [1.489439563s] Jan 30 13:18:33.187: INFO: Created: latency-svc-5fklp Jan 30 13:18:33.294: INFO: Got endpoints: latency-svc-5fklp [1.46995156s] Jan 30 13:18:33.303: 
INFO: Created: latency-svc-4kgvt Jan 30 13:18:33.317: INFO: Got endpoints: latency-svc-4kgvt [1.256088205s] Jan 30 13:18:33.358: INFO: Created: latency-svc-8svb2 Jan 30 13:18:33.380: INFO: Got endpoints: latency-svc-8svb2 [1.306091087s] Jan 30 13:18:33.460: INFO: Created: latency-svc-l4gfj Jan 30 13:18:33.461: INFO: Got endpoints: latency-svc-l4gfj [1.237104761s] Jan 30 13:18:33.481: INFO: Created: latency-svc-2rsfl Jan 30 13:18:33.492: INFO: Got endpoints: latency-svc-2rsfl [1.197652592s] Jan 30 13:18:33.518: INFO: Created: latency-svc-25trb Jan 30 13:18:33.534: INFO: Got endpoints: latency-svc-25trb [1.150830161s] Jan 30 13:18:33.673: INFO: Created: latency-svc-q2c8q Jan 30 13:18:33.688: INFO: Got endpoints: latency-svc-q2c8q [1.261721098s] Jan 30 13:18:33.738: INFO: Created: latency-svc-gtrvq Jan 30 13:18:33.852: INFO: Got endpoints: latency-svc-gtrvq [1.270870802s] Jan 30 13:18:33.889: INFO: Created: latency-svc-8dzbk Jan 30 13:18:33.924: INFO: Got endpoints: latency-svc-8dzbk [1.336760854s] Jan 30 13:18:34.474: INFO: Created: latency-svc-rdvr6 Jan 30 13:18:34.478: INFO: Got endpoints: latency-svc-rdvr6 [1.853392672s] Jan 30 13:18:34.643: INFO: Created: latency-svc-8dvcw Jan 30 13:18:34.656: INFO: Got endpoints: latency-svc-8dvcw [1.933140206s] Jan 30 13:18:34.825: INFO: Created: latency-svc-rzs4p Jan 30 13:18:34.834: INFO: Got endpoints: latency-svc-rzs4p [2.060358949s] Jan 30 13:18:34.932: INFO: Created: latency-svc-zxp7r Jan 30 13:18:35.080: INFO: Got endpoints: latency-svc-zxp7r [2.148799315s] Jan 30 13:18:35.120: INFO: Created: latency-svc-2tgr9 Jan 30 13:18:35.126: INFO: Got endpoints: latency-svc-2tgr9 [2.014074163s] Jan 30 13:18:35.181: INFO: Created: latency-svc-qrjfr Jan 30 13:18:35.245: INFO: Got endpoints: latency-svc-qrjfr [2.109062445s] Jan 30 13:18:35.267: INFO: Created: latency-svc-h8kdr Jan 30 13:18:35.280: INFO: Got endpoints: latency-svc-h8kdr [1.985217595s] Jan 30 13:18:35.324: INFO: Created: latency-svc-5xgrr Jan 30 13:18:35.339: INFO: Got 
endpoints: latency-svc-5xgrr [2.022199648s] Jan 30 13:18:35.410: INFO: Created: latency-svc-z4svc Jan 30 13:18:35.413: INFO: Got endpoints: latency-svc-z4svc [2.032428801s] Jan 30 13:18:35.450: INFO: Created: latency-svc-hsf6l Jan 30 13:18:35.460: INFO: Got endpoints: latency-svc-hsf6l [1.999179472s] Jan 30 13:18:35.487: INFO: Created: latency-svc-6svwg Jan 30 13:18:35.491: INFO: Got endpoints: latency-svc-6svwg [1.998583055s] Jan 30 13:18:35.651: INFO: Created: latency-svc-g7n6x Jan 30 13:18:35.658: INFO: Got endpoints: latency-svc-g7n6x [2.123137453s] Jan 30 13:18:35.844: INFO: Created: latency-svc-cgrtx Jan 30 13:18:35.957: INFO: Got endpoints: latency-svc-cgrtx [2.268705485s] Jan 30 13:18:35.979: INFO: Created: latency-svc-cqhz9 Jan 30 13:18:36.021: INFO: Got endpoints: latency-svc-cqhz9 [2.16843277s] Jan 30 13:18:36.025: INFO: Created: latency-svc-t5b5c Jan 30 13:18:36.111: INFO: Got endpoints: latency-svc-t5b5c [2.186406555s] Jan 30 13:18:36.124: INFO: Created: latency-svc-2b4xm Jan 30 13:18:36.132: INFO: Got endpoints: latency-svc-2b4xm [1.654062625s] Jan 30 13:18:36.169: INFO: Created: latency-svc-wpr88 Jan 30 13:18:36.274: INFO: Got endpoints: latency-svc-wpr88 [1.617653858s] Jan 30 13:18:36.282: INFO: Created: latency-svc-dvrns Jan 30 13:18:36.285: INFO: Got endpoints: latency-svc-dvrns [1.451063051s] Jan 30 13:18:36.338: INFO: Created: latency-svc-vctx8 Jan 30 13:18:36.439: INFO: Got endpoints: latency-svc-vctx8 [1.358185655s] Jan 30 13:18:36.439: INFO: Created: latency-svc-lj4qt Jan 30 13:18:36.446: INFO: Got endpoints: latency-svc-lj4qt [1.320015063s] Jan 30 13:18:36.510: INFO: Created: latency-svc-l55bc Jan 30 13:18:36.525: INFO: Got endpoints: latency-svc-l55bc [1.279535969s] Jan 30 13:18:36.610: INFO: Created: latency-svc-j48kz Jan 30 13:18:36.623: INFO: Got endpoints: latency-svc-j48kz [1.34320534s] Jan 30 13:18:36.656: INFO: Created: latency-svc-8d2pq Jan 30 13:18:36.672: INFO: Got endpoints: latency-svc-8d2pq [1.332102741s] Jan 30 13:18:36.771: 
INFO: Created: latency-svc-vvvls Jan 30 13:18:36.798: INFO: Got endpoints: latency-svc-vvvls [1.385601226s] Jan 30 13:18:36.799: INFO: Created: latency-svc-hpj5j Jan 30 13:18:36.813: INFO: Got endpoints: latency-svc-hpj5j [1.352509947s] Jan 30 13:18:36.857: INFO: Created: latency-svc-vzk8s Jan 30 13:18:36.935: INFO: Got endpoints: latency-svc-vzk8s [1.44397964s] Jan 30 13:18:36.993: INFO: Created: latency-svc-x8jfs Jan 30 13:18:37.003: INFO: Got endpoints: latency-svc-x8jfs [1.345075516s] Jan 30 13:18:37.094: INFO: Created: latency-svc-jn6bb Jan 30 13:18:37.099: INFO: Got endpoints: latency-svc-jn6bb [1.141150491s] Jan 30 13:18:37.137: INFO: Created: latency-svc-dc8hk Jan 30 13:18:37.155: INFO: Got endpoints: latency-svc-dc8hk [1.132892693s] Jan 30 13:18:37.288: INFO: Created: latency-svc-c2bb4 Jan 30 13:18:37.320: INFO: Got endpoints: latency-svc-c2bb4 [1.208729471s] Jan 30 13:18:37.324: INFO: Created: latency-svc-hntnb Jan 30 13:18:37.333: INFO: Got endpoints: latency-svc-hntnb [1.20056624s] Jan 30 13:18:37.441: INFO: Created: latency-svc-vsfnt Jan 30 13:18:37.454: INFO: Got endpoints: latency-svc-vsfnt [1.179405137s] Jan 30 13:18:37.501: INFO: Created: latency-svc-cvrct Jan 30 13:18:37.509: INFO: Got endpoints: latency-svc-cvrct [1.223779063s] Jan 30 13:18:37.509: INFO: Latencies: [179.834834ms 215.87923ms 265.29827ms 410.914471ms 458.820441ms 590.351852ms 595.740045ms 630.913361ms 778.6639ms 793.749281ms 986.732849ms 991.561317ms 1.045855935s 1.132892693s 1.141150491s 1.148688783s 1.150830161s 1.179405137s 1.196614188s 1.197652592s 1.20056624s 1.208729471s 1.216637648s 1.218895069s 1.223779063s 1.237104761s 1.256088205s 1.261721098s 1.270870802s 1.274068257s 1.279535969s 1.2961629s 1.306091087s 1.307825293s 1.309789009s 1.31353536s 1.316711474s 1.320015063s 1.330060847s 1.332102741s 1.333947598s 1.336760854s 1.337614585s 1.341016678s 1.34320534s 1.345075516s 1.352509947s 1.356553323s 1.358185655s 1.358312321s 1.360973196s 1.361928975s 1.365741885s 1.367737107s 
1.376004163s 1.383519525s 1.385601226s 1.390330562s 1.394965151s 1.39591749s 1.397089909s 1.406245576s 1.408819231s 1.417384857s 1.418885515s 1.441643166s 1.44397964s 1.449887918s 1.451063051s 1.451378743s 1.455332981s 1.45953063s 1.460792379s 1.463838202s 1.465007393s 1.468154801s 1.468419861s 1.469154504s 1.46995156s 1.470024477s 1.470452896s 1.471620738s 1.471946448s 1.473571019s 1.474301691s 1.474958037s 1.478502919s 1.483119805s 1.483528645s 1.484635596s 1.488756774s 1.489439563s 1.490754394s 1.491746345s 1.491910609s 1.491980867s 1.492060658s 1.492717099s 1.494592456s 1.497477448s 1.499524697s 1.503206065s 1.507306925s 1.507539526s 1.511501254s 1.511772931s 1.513966757s 1.514445373s 1.514736475s 1.514950705s 1.519054842s 1.523649096s 1.529995733s 1.530332844s 1.540144424s 1.54045674s 1.540718254s 1.541337796s 1.541736383s 1.543160458s 1.550560924s 1.569579785s 1.575217709s 1.576855588s 1.578693806s 1.5808364s 1.582145462s 1.583290072s 1.584686826s 1.585018851s 1.589620622s 1.589732726s 1.594858425s 1.601623707s 1.611559592s 1.612859015s 1.616038189s 1.617434864s 1.617653858s 1.628578027s 1.635392047s 1.636120509s 1.641198414s 1.648623341s 1.654062625s 1.654586389s 1.67403597s 1.67850519s 1.678869115s 1.681175176s 1.687386875s 1.688510751s 1.691723885s 1.701146048s 1.719361827s 1.720932555s 1.729783129s 1.737007338s 1.745527251s 1.748676841s 1.755209987s 1.762718228s 1.773873815s 1.785829574s 1.830036739s 1.836190139s 1.853392672s 1.856362694s 1.870198324s 1.894628614s 1.90224468s 1.904661914s 1.912332975s 1.922948473s 1.933140206s 1.948265619s 1.952471818s 1.97572275s 1.979533819s 1.985217595s 1.996149687s 1.998583055s 1.999179472s 2.000327484s 2.001498068s 2.003709323s 2.012581234s 2.014074163s 2.017351116s 2.019654365s 2.022199648s 2.032428801s 2.047682797s 2.060358949s 2.109062445s 2.123137453s 2.148799315s 2.16843277s 2.186406555s 2.268705485s] Jan 30 13:18:37.510: INFO: 50 %ile: 1.499524697s Jan 30 13:18:37.510: INFO: 90 %ile: 1.996149687s Jan 30 
13:18:37.510: INFO: 99 %ile: 2.186406555s Jan 30 13:18:37.510: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:18:37.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2173" for this suite. Jan 30 13:19:35.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:19:35.718: INFO: namespace svc-latency-2173 deletion completed in 58.202091402s • [SLOW TEST:89.388 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:19:35.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-bpbw STEP: Creating a pod to test atomic-volume-subpath Jan 30 13:19:35.833: INFO: Waiting up 
to 5m0s for pod "pod-subpath-test-downwardapi-bpbw" in namespace "subpath-6605" to be "success or failure" Jan 30 13:19:35.841: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Pending", Reason="", readiness=false. Elapsed: 7.970215ms Jan 30 13:19:37.854: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020788036s Jan 30 13:19:39.878: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044160446s Jan 30 13:19:41.889: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055603678s Jan 30 13:19:43.900: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06646404s Jan 30 13:19:45.915: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 10.081770109s Jan 30 13:19:47.930: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 12.096231876s Jan 30 13:19:49.941: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 14.107161832s Jan 30 13:19:51.958: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 16.124120961s Jan 30 13:19:53.976: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 18.142110879s Jan 30 13:19:55.989: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 20.155052692s Jan 30 13:19:58.000: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 22.166557441s Jan 30 13:20:00.010: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 24.176564431s Jan 30 13:20:02.023: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.189711001s Jan 30 13:20:04.040: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Running", Reason="", readiness=true. Elapsed: 28.207018356s Jan 30 13:20:06.049: INFO: Pod "pod-subpath-test-downwardapi-bpbw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.215429123s STEP: Saw pod success Jan 30 13:20:06.049: INFO: Pod "pod-subpath-test-downwardapi-bpbw" satisfied condition "success or failure" Jan 30 13:20:06.053: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-bpbw container test-container-subpath-downwardapi-bpbw: STEP: delete the pod Jan 30 13:20:06.251: INFO: Waiting for pod pod-subpath-test-downwardapi-bpbw to disappear Jan 30 13:20:06.264: INFO: Pod pod-subpath-test-downwardapi-bpbw no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-bpbw Jan 30 13:20:06.264: INFO: Deleting pod "pod-subpath-test-downwardapi-bpbw" in namespace "subpath-6605" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:20:06.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6605" for this suite. 
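The framework output above shows the standard wait pattern: poll the pod's phase every couple of seconds, stop on "Succeeded" (or "Failed"), and give up after the 5m0s deadline. A minimal sketch of that loop is below; `get_pod_phase` is a hypothetical stand-in for whatever client call fetches the phase (e.g. something equivalent to `kubectl get pod -o jsonpath='{.status.phase}'`), and the clock/sleep hooks are only there so the loop can be exercised without a live cluster — this is an illustration of the pattern, not the e2e framework's actual implementation.

```python
import time


def wait_for_success_or_failure(get_pod_phase, timeout=300.0, interval=2.0,
                                now=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it reaches Succeeded/Failed or the timeout expires.

    get_pod_phase: callable returning the current phase string
    ("Pending", "Running", "Succeeded", "Failed"); injected as a
    hypothetical stand-in for a real API/kubectl call.
    Returns (succeeded, elapsed_seconds); raises TimeoutError on deadline.
    """
    start = now()
    while True:
        phase = get_pod_phase()
        elapsed = now() - start
        if phase == "Succeeded":
            return True, elapsed
        if phase == "Failed":
            return False, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

The injected `now`/`sleep` hooks mirror how such loops are usually unit-tested: drive the fake clock forward on each sleep and feed a scripted sequence of phases, so the Pending → Running → Succeeded progression seen in the log can be reproduced deterministically.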
Jan 30 13:20:12.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:20:12.518: INFO: namespace subpath-6605 deletion completed in 6.240181181s • [SLOW TEST:36.799 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:20:12.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 30 13:20:23.561: INFO: Successfully updated pod "labelsupdate1ce265c4-343e-4776-9418-bb741d3f2e7e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:20:25.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"downward-api-3464" for this suite. Jan 30 13:20:47.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:20:47.979: INFO: namespace downward-api-3464 deletion completed in 22.24405668s • [SLOW TEST:35.460 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:20:47.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2035 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2035 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2035 Jan 30 13:20:48.175: INFO: 
Found 0 stateful pods, waiting for 1 Jan 30 13:20:58.186: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 30 13:20:58.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 30 13:21:01.181: INFO: stderr: "I0130 13:21:00.606595 1663 log.go:172] (0xc000bfe420) (0xc00074e780) Create stream\nI0130 13:21:00.607007 1663 log.go:172] (0xc000bfe420) (0xc00074e780) Stream added, broadcasting: 1\nI0130 13:21:00.676856 1663 log.go:172] (0xc000bfe420) Reply frame received for 1\nI0130 13:21:00.677133 1663 log.go:172] (0xc000bfe420) (0xc0006661e0) Create stream\nI0130 13:21:00.677181 1663 log.go:172] (0xc000bfe420) (0xc0006661e0) Stream added, broadcasting: 3\nI0130 13:21:00.680193 1663 log.go:172] (0xc000bfe420) Reply frame received for 3\nI0130 13:21:00.680233 1663 log.go:172] (0xc000bfe420) (0xc00074e820) Create stream\nI0130 13:21:00.680244 1663 log.go:172] (0xc000bfe420) (0xc00074e820) Stream added, broadcasting: 5\nI0130 13:21:00.683993 1663 log.go:172] (0xc000bfe420) Reply frame received for 5\nI0130 13:21:00.965907 1663 log.go:172] (0xc000bfe420) Data frame received for 5\nI0130 13:21:00.966004 1663 log.go:172] (0xc00074e820) (5) Data frame handling\nI0130 13:21:00.966032 1663 log.go:172] (0xc00074e820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 13:21:01.024924 1663 log.go:172] (0xc000bfe420) Data frame received for 3\nI0130 13:21:01.025090 1663 log.go:172] (0xc0006661e0) (3) Data frame handling\nI0130 13:21:01.025132 1663 log.go:172] (0xc0006661e0) (3) Data frame sent\nI0130 13:21:01.161906 1663 log.go:172] (0xc000bfe420) Data frame received for 1\nI0130 13:21:01.162254 1663 log.go:172] (0xc00074e780) (1) Data frame handling\nI0130 13:21:01.163417 1663 log.go:172] (0xc00074e780) 
(1) Data frame sent\nI0130 13:21:01.164552 1663 log.go:172] (0xc000bfe420) (0xc00074e780) Stream removed, broadcasting: 1\nI0130 13:21:01.165075 1663 log.go:172] (0xc000bfe420) (0xc0006661e0) Stream removed, broadcasting: 3\nI0130 13:21:01.165448 1663 log.go:172] (0xc000bfe420) (0xc00074e820) Stream removed, broadcasting: 5\nI0130 13:21:01.165575 1663 log.go:172] (0xc000bfe420) Go away received\nI0130 13:21:01.166842 1663 log.go:172] (0xc000bfe420) (0xc00074e780) Stream removed, broadcasting: 1\nI0130 13:21:01.166872 1663 log.go:172] (0xc000bfe420) (0xc0006661e0) Stream removed, broadcasting: 3\nI0130 13:21:01.166890 1663 log.go:172] (0xc000bfe420) (0xc00074e820) Stream removed, broadcasting: 5\n" Jan 30 13:21:01.181: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 30 13:21:01.181: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 30 13:21:01.189: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 30 13:21:11.199: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 30 13:21:11.200: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 13:21:11.231: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:11.231: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:11.231: INFO: Jan 30 13:21:11.231: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 30 13:21:13.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 
8.990302811s Jan 30 13:21:14.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.161639415s Jan 30 13:21:15.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.830916344s Jan 30 13:21:16.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.797072165s Jan 30 13:21:17.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.776208983s Jan 30 13:21:19.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.553588619s Jan 30 13:21:20.557: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.133434878s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2035 Jan 30 13:21:21.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:21:22.453: INFO: stderr: "I0130 13:21:21.919523 1698 log.go:172] (0xc000116dc0) (0xc00054e780) Create stream\nI0130 13:21:21.920043 1698 log.go:172] (0xc000116dc0) (0xc00054e780) Stream added, broadcasting: 1\nI0130 13:21:21.957244 1698 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0130 13:21:21.957619 1698 log.go:172] (0xc000116dc0) (0xc0007f6000) Create stream\nI0130 13:21:21.957669 1698 log.go:172] (0xc000116dc0) (0xc0007f6000) Stream added, broadcasting: 3\nI0130 13:21:21.966681 1698 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0130 13:21:21.966885 1698 log.go:172] (0xc000116dc0) (0xc0007f2000) Create stream\nI0130 13:21:21.966916 1698 log.go:172] (0xc000116dc0) (0xc0007f2000) Stream added, broadcasting: 5\nI0130 13:21:21.971274 1698 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0130 13:21:22.224144 1698 log.go:172] (0xc000116dc0) Data frame received for 3\nI0130 13:21:22.224529 1698 log.go:172] (0xc000116dc0) Data frame received for 5\nI0130 13:21:22.224580 1698 log.go:172] (0xc0007f2000) (5) Data frame 
handling\nI0130 13:21:22.224640 1698 log.go:172] (0xc0007f2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 13:21:22.224719 1698 log.go:172] (0xc0007f6000) (3) Data frame handling\nI0130 13:21:22.224760 1698 log.go:172] (0xc0007f6000) (3) Data frame sent\nI0130 13:21:22.422298 1698 log.go:172] (0xc000116dc0) Data frame received for 1\nI0130 13:21:22.422500 1698 log.go:172] (0xc000116dc0) (0xc0007f6000) Stream removed, broadcasting: 3\nI0130 13:21:22.422690 1698 log.go:172] (0xc000116dc0) (0xc0007f2000) Stream removed, broadcasting: 5\nI0130 13:21:22.422803 1698 log.go:172] (0xc00054e780) (1) Data frame handling\nI0130 13:21:22.422862 1698 log.go:172] (0xc00054e780) (1) Data frame sent\nI0130 13:21:22.422875 1698 log.go:172] (0xc000116dc0) (0xc00054e780) Stream removed, broadcasting: 1\nI0130 13:21:22.422933 1698 log.go:172] (0xc000116dc0) Go away received\nI0130 13:21:22.424976 1698 log.go:172] (0xc000116dc0) (0xc00054e780) Stream removed, broadcasting: 1\nI0130 13:21:22.425014 1698 log.go:172] (0xc000116dc0) (0xc0007f6000) Stream removed, broadcasting: 3\nI0130 13:21:22.425023 1698 log.go:172] (0xc000116dc0) (0xc0007f2000) Stream removed, broadcasting: 5\n" Jan 30 13:21:22.453: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 30 13:21:22.453: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 30 13:21:22.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:21:23.009: INFO: stderr: "I0130 13:21:22.725283 1718 log.go:172] (0xc0008d20b0) (0xc00081a6e0) Create stream\nI0130 13:21:22.725754 1718 log.go:172] (0xc0008d20b0) (0xc00081a6e0) Stream added, broadcasting: 1\nI0130 13:21:22.744298 1718 log.go:172] (0xc0008d20b0) Reply frame received for 1\nI0130 13:21:22.744467 1718 log.go:172] 
(0xc0008d20b0) (0xc0004d6140) Create stream\nI0130 13:21:22.744489 1718 log.go:172] (0xc0008d20b0) (0xc0004d6140) Stream added, broadcasting: 3\nI0130 13:21:22.747826 1718 log.go:172] (0xc0008d20b0) Reply frame received for 3\nI0130 13:21:22.747850 1718 log.go:172] (0xc0008d20b0) (0xc00081a780) Create stream\nI0130 13:21:22.747856 1718 log.go:172] (0xc0008d20b0) (0xc00081a780) Stream added, broadcasting: 5\nI0130 13:21:22.749502 1718 log.go:172] (0xc0008d20b0) Reply frame received for 5\nI0130 13:21:22.890710 1718 log.go:172] (0xc0008d20b0) Data frame received for 5\nI0130 13:21:22.890841 1718 log.go:172] (0xc00081a780) (5) Data frame handling\nI0130 13:21:22.890915 1718 log.go:172] (0xc00081a780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 13:21:22.893149 1718 log.go:172] (0xc0008d20b0) Data frame received for 3\nI0130 13:21:22.893191 1718 log.go:172] (0xc0004d6140) (3) Data frame handling\nI0130 13:21:22.893203 1718 log.go:172] (0xc0004d6140) (3) Data frame sent\nI0130 13:21:22.893242 1718 log.go:172] (0xc0008d20b0) Data frame received for 5\nI0130 13:21:22.893248 1718 log.go:172] (0xc00081a780) (5) Data frame handling\nI0130 13:21:22.893253 1718 log.go:172] (0xc00081a780) (5) Data frame sent\nI0130 13:21:22.893258 1718 log.go:172] (0xc0008d20b0) Data frame received for 5\nI0130 13:21:22.893262 1718 log.go:172] (0xc00081a780) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0130 13:21:22.893273 1718 log.go:172] (0xc00081a780) (5) Data frame sent\nI0130 13:21:22.998161 1718 log.go:172] (0xc0008d20b0) (0xc00081a780) Stream removed, broadcasting: 5\nI0130 13:21:22.998301 1718 log.go:172] (0xc0008d20b0) Data frame received for 1\nI0130 13:21:22.998330 1718 log.go:172] (0xc0008d20b0) (0xc0004d6140) Stream removed, broadcasting: 3\nI0130 13:21:22.998375 1718 log.go:172] (0xc00081a6e0) (1) Data frame handling\nI0130 13:21:22.998411 1718 log.go:172] (0xc00081a6e0) (1) Data frame sent\nI0130 
13:21:22.998422 1718 log.go:172] (0xc0008d20b0) (0xc00081a6e0) Stream removed, broadcasting: 1\nI0130 13:21:22.998440 1718 log.go:172] (0xc0008d20b0) Go away received\nI0130 13:21:22.999443 1718 log.go:172] (0xc0008d20b0) (0xc00081a6e0) Stream removed, broadcasting: 1\nI0130 13:21:22.999457 1718 log.go:172] (0xc0008d20b0) (0xc0004d6140) Stream removed, broadcasting: 3\nI0130 13:21:22.999465 1718 log.go:172] (0xc0008d20b0) (0xc00081a780) Stream removed, broadcasting: 5\n" Jan 30 13:21:23.010: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 30 13:21:23.010: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 30 13:21:23.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:21:23.803: INFO: stderr: "I0130 13:21:23.311158 1737 log.go:172] (0xc0009134a0) (0xc0008fd040) Create stream\nI0130 13:21:23.311586 1737 log.go:172] (0xc0009134a0) (0xc0008fd040) Stream added, broadcasting: 1\nI0130 13:21:23.334100 1737 log.go:172] (0xc0009134a0) Reply frame received for 1\nI0130 13:21:23.334237 1737 log.go:172] (0xc0009134a0) (0xc000874000) Create stream\nI0130 13:21:23.334265 1737 log.go:172] (0xc0009134a0) (0xc000874000) Stream added, broadcasting: 3\nI0130 13:21:23.336506 1737 log.go:172] (0xc0009134a0) Reply frame received for 3\nI0130 13:21:23.336552 1737 log.go:172] (0xc0009134a0) (0xc0008fc000) Create stream\nI0130 13:21:23.336561 1737 log.go:172] (0xc0009134a0) (0xc0008fc000) Stream added, broadcasting: 5\nI0130 13:21:23.339771 1737 log.go:172] (0xc0009134a0) Reply frame received for 5\nI0130 13:21:23.481676 1737 log.go:172] (0xc0009134a0) Data frame received for 5\nI0130 13:21:23.481852 1737 log.go:172] (0xc0008fc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename 
'/tmp/index.html': No such file or directory\n+ true\nI0130 13:21:23.481975 1737 log.go:172] (0xc0009134a0) Data frame received for 3\nI0130 13:21:23.482175 1737 log.go:172] (0xc000874000) (3) Data frame handling\nI0130 13:21:23.482200 1737 log.go:172] (0xc000874000) (3) Data frame sent\nI0130 13:21:23.482300 1737 log.go:172] (0xc0008fc000) (5) Data frame sent\nI0130 13:21:23.775039 1737 log.go:172] (0xc0009134a0) Data frame received for 1\nI0130 13:21:23.775370 1737 log.go:172] (0xc0008fd040) (1) Data frame handling\nI0130 13:21:23.775469 1737 log.go:172] (0xc0008fd040) (1) Data frame sent\nI0130 13:21:23.775932 1737 log.go:172] (0xc0009134a0) (0xc0008fc000) Stream removed, broadcasting: 5\nI0130 13:21:23.776142 1737 log.go:172] (0xc0009134a0) (0xc0008fd040) Stream removed, broadcasting: 1\nI0130 13:21:23.777809 1737 log.go:172] (0xc0009134a0) (0xc000874000) Stream removed, broadcasting: 3\nI0130 13:21:23.778154 1737 log.go:172] (0xc0009134a0) (0xc0008fd040) Stream removed, broadcasting: 1\nI0130 13:21:23.778226 1737 log.go:172] (0xc0009134a0) (0xc000874000) Stream removed, broadcasting: 3\nI0130 13:21:23.778309 1737 log.go:172] (0xc0009134a0) (0xc0008fc000) Stream removed, broadcasting: 5\nI0130 13:21:23.778390 1737 log.go:172] (0xc0009134a0) Go away received\n" Jan 30 13:21:23.803: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 30 13:21:23.803: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 30 13:21:23.812: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 13:21:23.812: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 13:21:23.812: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 30 13:21:23.819: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 30 13:21:24.359: INFO: stderr: "I0130 13:21:24.057933 1758 log.go:172] (0xc000138840) (0xc0005363c0) Create stream\nI0130 13:21:24.058425 1758 log.go:172] (0xc000138840) (0xc0005363c0) Stream added, broadcasting: 1\nI0130 13:21:24.073568 1758 log.go:172] (0xc000138840) Reply frame received for 1\nI0130 13:21:24.073689 1758 log.go:172] (0xc000138840) (0xc0008fe000) Create stream\nI0130 13:21:24.073711 1758 log.go:172] (0xc000138840) (0xc0008fe000) Stream added, broadcasting: 3\nI0130 13:21:24.077758 1758 log.go:172] (0xc000138840) Reply frame received for 3\nI0130 13:21:24.077804 1758 log.go:172] (0xc000138840) (0xc00080a000) Create stream\nI0130 13:21:24.077815 1758 log.go:172] (0xc000138840) (0xc00080a000) Stream added, broadcasting: 5\nI0130 13:21:24.079390 1758 log.go:172] (0xc000138840) Reply frame received for 5\nI0130 13:21:24.207806 1758 log.go:172] (0xc000138840) Data frame received for 5\nI0130 13:21:24.207967 1758 log.go:172] (0xc00080a000) (5) Data frame handling\nI0130 13:21:24.207986 1758 log.go:172] (0xc00080a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 13:21:24.208015 1758 log.go:172] (0xc000138840) Data frame received for 3\nI0130 13:21:24.208020 1758 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0130 13:21:24.208035 1758 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0130 13:21:24.347366 1758 log.go:172] (0xc000138840) Data frame received for 1\nI0130 13:21:24.347569 1758 log.go:172] (0xc000138840) (0xc0008fe000) Stream removed, broadcasting: 3\nI0130 13:21:24.347632 1758 log.go:172] (0xc0005363c0) (1) Data frame handling\nI0130 13:21:24.347676 1758 log.go:172] (0xc0005363c0) (1) Data frame sent\nI0130 13:21:24.347705 1758 log.go:172] (0xc000138840) (0xc00080a000) Stream removed, broadcasting: 5\nI0130 13:21:24.347745 1758 log.go:172] (0xc000138840) 
(0xc0005363c0) Stream removed, broadcasting: 1\nI0130 13:21:24.347767 1758 log.go:172] (0xc000138840) Go away received\nI0130 13:21:24.349044 1758 log.go:172] (0xc000138840) (0xc0005363c0) Stream removed, broadcasting: 1\nI0130 13:21:24.349087 1758 log.go:172] (0xc000138840) (0xc0008fe000) Stream removed, broadcasting: 3\nI0130 13:21:24.349097 1758 log.go:172] (0xc000138840) (0xc00080a000) Stream removed, broadcasting: 5\n" Jan 30 13:21:24.359: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 30 13:21:24.359: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 30 13:21:24.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 30 13:21:24.785: INFO: stderr: "I0130 13:21:24.605019 1776 log.go:172] (0xc000950370) (0xc000302780) Create stream\nI0130 13:21:24.605620 1776 log.go:172] (0xc000950370) (0xc000302780) Stream added, broadcasting: 1\nI0130 13:21:24.612178 1776 log.go:172] (0xc000950370) Reply frame received for 1\nI0130 13:21:24.612284 1776 log.go:172] (0xc000950370) (0xc000302820) Create stream\nI0130 13:21:24.612306 1776 log.go:172] (0xc000950370) (0xc000302820) Stream added, broadcasting: 3\nI0130 13:21:24.613655 1776 log.go:172] (0xc000950370) Reply frame received for 3\nI0130 13:21:24.613698 1776 log.go:172] (0xc000950370) (0xc000832000) Create stream\nI0130 13:21:24.613723 1776 log.go:172] (0xc000950370) (0xc000832000) Stream added, broadcasting: 5\nI0130 13:21:24.615646 1776 log.go:172] (0xc000950370) Reply frame received for 5\nI0130 13:21:24.684254 1776 log.go:172] (0xc000950370) Data frame received for 5\nI0130 13:21:24.684318 1776 log.go:172] (0xc000832000) (5) Data frame handling\nI0130 13:21:24.684356 1776 log.go:172] (0xc000832000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0130 13:21:24.702496 1776 log.go:172] (0xc000950370) Data frame received for 3\nI0130 13:21:24.702514 1776 log.go:172] (0xc000302820) (3) Data frame handling\nI0130 13:21:24.702524 1776 log.go:172] (0xc000302820) (3) Data frame sent\nI0130 13:21:24.772448 1776 log.go:172] (0xc000950370) (0xc000302820) Stream removed, broadcasting: 3\nI0130 13:21:24.772714 1776 log.go:172] (0xc000950370) Data frame received for 1\nI0130 13:21:24.773092 1776 log.go:172] (0xc000950370) (0xc000832000) Stream removed, broadcasting: 5\nI0130 13:21:24.773492 1776 log.go:172] (0xc000302780) (1) Data frame handling\nI0130 13:21:24.773888 1776 log.go:172] (0xc000302780) (1) Data frame sent\nI0130 13:21:24.774083 1776 log.go:172] (0xc000950370) (0xc000302780) Stream removed, broadcasting: 1\nI0130 13:21:24.774164 1776 log.go:172] (0xc000950370) Go away received\nI0130 13:21:24.776058 1776 log.go:172] (0xc000950370) (0xc000302780) Stream removed, broadcasting: 1\nI0130 13:21:24.776131 1776 log.go:172] (0xc000950370) (0xc000302820) Stream removed, broadcasting: 3\nI0130 13:21:24.776169 1776 log.go:172] (0xc000950370) (0xc000832000) Stream removed, broadcasting: 5\n" Jan 30 13:21:24.785: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 30 13:21:24.785: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 30 13:21:24.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 30 13:21:25.425: INFO: stderr: "I0130 13:21:25.014658 1797 log.go:172] (0xc0008bc420) (0xc0009d0640) Create stream\nI0130 13:21:25.014876 1797 log.go:172] (0xc0008bc420) (0xc0009d0640) Stream added, broadcasting: 1\nI0130 13:21:25.024933 1797 log.go:172] (0xc0008bc420) Reply frame received for 1\nI0130 13:21:25.025017 1797 log.go:172] (0xc0008bc420) (0xc0005b2280) Create 
stream\nI0130 13:21:25.025036 1797 log.go:172] (0xc0008bc420) (0xc0005b2280) Stream added, broadcasting: 3\nI0130 13:21:25.027044 1797 log.go:172] (0xc0008bc420) Reply frame received for 3\nI0130 13:21:25.027092 1797 log.go:172] (0xc0008bc420) (0xc0007a6000) Create stream\nI0130 13:21:25.027108 1797 log.go:172] (0xc0008bc420) (0xc0007a6000) Stream added, broadcasting: 5\nI0130 13:21:25.028693 1797 log.go:172] (0xc0008bc420) Reply frame received for 5\nI0130 13:21:25.180717 1797 log.go:172] (0xc0008bc420) Data frame received for 5\nI0130 13:21:25.180932 1797 log.go:172] (0xc0007a6000) (5) Data frame handling\nI0130 13:21:25.180978 1797 log.go:172] (0xc0007a6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 13:21:25.234896 1797 log.go:172] (0xc0008bc420) Data frame received for 3\nI0130 13:21:25.235098 1797 log.go:172] (0xc0005b2280) (3) Data frame handling\nI0130 13:21:25.235141 1797 log.go:172] (0xc0005b2280) (3) Data frame sent\nI0130 13:21:25.398353 1797 log.go:172] (0xc0008bc420) (0xc0007a6000) Stream removed, broadcasting: 5\nI0130 13:21:25.398920 1797 log.go:172] (0xc0008bc420) Data frame received for 1\nI0130 13:21:25.399039 1797 log.go:172] (0xc0008bc420) (0xc0005b2280) Stream removed, broadcasting: 3\nI0130 13:21:25.399183 1797 log.go:172] (0xc0009d0640) (1) Data frame handling\nI0130 13:21:25.399330 1797 log.go:172] (0xc0009d0640) (1) Data frame sent\nI0130 13:21:25.399753 1797 log.go:172] (0xc0008bc420) (0xc0009d0640) Stream removed, broadcasting: 1\nI0130 13:21:25.401200 1797 log.go:172] (0xc0008bc420) (0xc0009d0640) Stream removed, broadcasting: 1\nI0130 13:21:25.401230 1797 log.go:172] (0xc0008bc420) (0xc0005b2280) Stream removed, broadcasting: 3\nI0130 13:21:25.401248 1797 log.go:172] (0xc0008bc420) (0xc0007a6000) Stream removed, broadcasting: 5\n" Jan 30 13:21:25.425: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 30 13:21:25.425: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ 
|| true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 30 13:21:25.425: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 13:21:25.459: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 30 13:21:35.476: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 30 13:21:35.476: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 30 13:21:35.476: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 30 13:21:35.497: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:35.497: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:35.497: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:35.497: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:35.497: INFO: Jan 30 13:21:35.497: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 30 13:21:37.061: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:37.061: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:37.062: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:37.062: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:37.062: INFO: Jan 30 13:21:37.062: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 30 13:21:38.071: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:38.071: INFO: ss-0 iruya-node 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:38.071: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:38.071: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:38.071: INFO: Jan 30 13:21:38.071: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 30 13:21:39.085: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:39.086: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:39.086: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:39.086: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:39.086: INFO: Jan 30 13:21:39.086: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 30 13:21:40.098: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:40.098: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:40.098: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:40.098: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:40.099: INFO: Jan 30 13:21:40.099: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 30 13:21:41.115: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:41.115: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:41.115: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:41.115: INFO: ss-2 iruya-node Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:41.115: INFO: Jan 30 13:21:41.115: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 30 13:21:42.159: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:42.159: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:42.159: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:42.159: INFO: Jan 30 13:21:42.159: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 30 13:21:43.217: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:43.217: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:43.218: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:11 +0000 UTC }] Jan 30 13:21:43.218: INFO: Jan 30 13:21:43.218: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 30 13:21:44.232: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:44.232: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:44.232: INFO: Jan 30 13:21:44.232: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 30 13:21:45.242: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 13:21:45.242: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:20:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:21:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-01-30 13:20:48 +0000 UTC }] Jan 30 13:21:45.242: INFO: Jan 30 13:21:45.242: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2035 Jan 30 13:21:46.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:21:46.538: INFO: rc: 1 Jan 30 13:21:46.539: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001cafbf0 exit status 1 true [0xc001cb48b8 0xc001cb48f8 0xc001cb4930] [0xc001cb48b8 0xc001cb48f8 0xc001cb4930] [0xc001cb48e0 0xc001cb4920] [0xba6c50 0xba6c50] 0xc0021cac00 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jan 30 13:21:56.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:21:56.715: INFO: rc: 1 Jan 30 13:21:56.716: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002380960 exit status 1 true [0xc00238a308 0xc00238a320 0xc00238a338] [0xc00238a308 0xc00238a320 0xc00238a338] [0xc00238a318 0xc00238a330] [0xba6c50 0xba6c50] 0xc0020c0de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:22:06.718: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:22:06.876: INFO: rc: 1 Jan 30 13:22:06.877: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243cae0 exit status 1 true [0xc002dd4490 0xc002dd44d8 0xc002dd44f0] [0xc002dd4490 0xc002dd44d8 0xc002dd44f0] [0xc002dd44a0 0xc002dd44e8] [0xba6c50 0xba6c50] 0xc001e5e960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:22:16.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:22:17.089: INFO: rc: 1 Jan 30 13:22:17.089: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243cba0 exit status 1 true [0xc002dd4510 0xc002dd4538 0xc002dd4578] [0xc002dd4510 0xc002dd4538 0xc002dd4578] [0xc002dd4530 0xc002dd4560] [0xba6c50 0xba6c50] 0xc001e5eea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:22:27.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:22:27.236: INFO: rc: 1 Jan 30 13:22:27.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00243cc60 exit status 1 true [0xc002dd4580 0xc002dd45b0 0xc002dd45d8] [0xc002dd4580 0xc002dd45b0 0xc002dd45d8] [0xc002dd4590 0xc002dd45d0] [0xba6c50 0xba6c50] 0xc001e5f3e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:22:37.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:22:37.440: INFO: rc: 1 Jan 30 13:22:37.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e090 exit status 1 true [0xc000186040 0xc001cb4010 0xc001cb4028] [0xc000186040 0xc001cb4010 0xc001cb4028] [0xc001cb4008 0xc001cb4020] [0xba6c50 0xba6c50] 0xc0026b6660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:22:47.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:22:47.656: INFO: rc: 1 Jan 30 13:22:47.657: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed6090 exit status 1 true [0xc002dd4000 0xc002dd4018 0xc002dd4030] [0xc002dd4000 0xc002dd4018 0xc002dd4030] [0xc002dd4010 0xc002dd4028] [0xba6c50 0xba6c50] 0xc0026d2240 }: Command 
stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:22:57.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:22:57.859: INFO: rc: 1 Jan 30 13:22:57.860: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ad8090 exit status 1 true [0xc00203a008 0xc00203a040 0xc00203a0a0] [0xc00203a008 0xc00203a040 0xc00203a0a0] [0xc00203a038 0xc00203a088] [0xba6c50 0xba6c50] 0xc00274af00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:23:07.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:23:08.089: INFO: rc: 1 Jan 30 13:23:08.090: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ad8150 exit status 1 true [0xc00203a0b8 0xc00203a100 0xc00203a160] [0xc00203a0b8 0xc00203a100 0xc00203a160] [0xc00203a0f8 0xc00203a140] [0xba6c50 0xba6c50] 0xc00274bce0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:23:18.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:23:18.268: INFO: rc: 1 Jan 30 13:23:18.269: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cac120 exit status 1 true [0xc002fc8000 0xc002fc8018 0xc002fc8030] [0xc002fc8000 0xc002fc8018 0xc002fc8030] [0xc002fc8010 0xc002fc8028] [0xba6c50 0xba6c50] 0xc0022d94a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:23:28.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:23:28.468: INFO: rc: 1 Jan 30 13:23:28.469: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed6150 exit status 1 true [0xc002dd4038 0xc002dd4050 0xc002dd4068] [0xc002dd4038 0xc002dd4050 0xc002dd4068] [0xc002dd4048 0xc002dd4060] [0xba6c50 0xba6c50] 0xc0026d2600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:23:38.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:23:38.846: INFO: rc: 1 Jan 30 13:23:38.846: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002ad8270 exit status 1 true [0xc00203a170 0xc00203a190 
0xc00203a1c0] [0xc00203a170 0xc00203a190 0xc00203a1c0] [0xc00203a180 0xc00203a1b0] [0xba6c50 0xba6c50] 0xc002dc3740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:23:48.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:23:49.065: INFO: rc: 1 Jan 30 13:23:49.065: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e1e0 exit status 1 true [0xc001cb4030 0xc001cb4048 0xc001cb4060] [0xc001cb4030 0xc001cb4048 0xc001cb4060] [0xc001cb4040 0xc001cb4058] [0xba6c50 0xba6c50] 0xc0026b7500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:23:59.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:23:59.235: INFO: rc: 1 Jan 30 13:23:59.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cac240 exit status 1 true [0xc002fc8038 0xc002fc8050 0xc002fc8068] [0xc002fc8038 0xc002fc8050 0xc002fc8068] [0xc002fc8048 0xc002fc8060] [0xba6c50 0xba6c50] 0xc0016afec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:24:09.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:24:09.419: INFO: rc: 1 Jan 30 13:24:09.420: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cac300 exit status 1 true [0xc002fc8070 0xc002fc80b0 0xc002fc80d8] [0xc002fc8070 0xc002fc80b0 0xc002fc80d8] [0xc002fc80a0 0xc002fc80d0] [0xba6c50 0xba6c50] 0xc002b14de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:24:19.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:24:19.595: INFO: rc: 1 Jan 30 13:24:19.596: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e330 exit status 1 true [0xc001cb4068 0xc001cb4080 0xc001cb4098] [0xc001cb4068 0xc001cb4080 0xc001cb4098] [0xc001cb4078 0xc001cb4090] [0xba6c50 0xba6c50] 0xc0026b7b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:24:29.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:24:29.777: INFO: rc: 1 Jan 30 13:24:29.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cac3f0 exit status 1 true [0xc002fc80f0 0xc002fc8118 0xc002fc8150] [0xc002fc80f0 0xc002fc8118 0xc002fc8150] [0xc002fc8110 0xc002fc8148] [0xba6c50 0xba6c50] 0xc001eea900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:24:39.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:24:40.010: INFO: rc: 1 Jan 30 13:24:40.011: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e0c0 exit status 1 true [0xc000186040 0xc001cb4010 0xc001cb4028] [0xc000186040 0xc001cb4010 0xc001cb4028] [0xc001cb4008 0xc001cb4020] [0xba6c50 0xba6c50] 0xc002b14d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:24:50.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:24:50.241: INFO: rc: 1 Jan 30 13:24:50.241: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed60c0 exit status 1 true [0xc00203a008 0xc00203a040 0xc00203a0a0] [0xc00203a008 0xc00203a040 0xc00203a0a0] [0xc00203a038 0xc00203a088] [0xba6c50 0xba6c50] 0xc0022d84e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 
13:25:00.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:25:00.453: INFO: rc: 1 Jan 30 13:25:00.453: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e180 exit status 1 true [0xc001cb4030 0xc001cb4048 0xc001cb4060] [0xc001cb4030 0xc001cb4048 0xc001cb4060] [0xc001cb4040 0xc001cb4058] [0xba6c50 0xba6c50] 0xc00274a7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:25:10.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:25:10.729: INFO: rc: 1 Jan 30 13:25:10.729: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed61e0 exit status 1 true [0xc00203a0b8 0xc00203a100 0xc00203a160] [0xc00203a0b8 0xc00203a100 0xc00203a160] [0xc00203a0f8 0xc00203a140] [0xba6c50 0xba6c50] 0xc0022d9b60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:25:20.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:25:20.940: INFO: rc: 1 Jan 30 13:25:20.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed62a0 exit status 1 true [0xc00203a170 0xc00203a190 0xc00203a1c0] [0xc00203a170 0xc00203a190 0xc00203a1c0] [0xc00203a180 0xc00203a1b0] [0xba6c50 0xba6c50] 0xc0026b6a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:25:30.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:25:31.140: INFO: rc: 1 Jan 30 13:25:31.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e2a0 exit status 1 true [0xc001cb4068 0xc001cb4080 0xc001cb4098] [0xc001cb4068 0xc001cb4080 0xc001cb4098] [0xc001cb4078 0xc001cb4090] [0xba6c50 0xba6c50] 0xc00274b8c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:25:41.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:25:41.326: INFO: rc: 1 Jan 30 13:25:41.327: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed6360 exit status 1 true [0xc00203a1d0 0xc00203a208 0xc00203a260] [0xc00203a1d0 0xc00203a208 0xc00203a260] [0xc00203a1f0 0xc00203a240] [0xba6c50 
0xba6c50] 0xc0026b7560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:25:51.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:25:51.536: INFO: rc: 1 Jan 30 13:25:51.536: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cac0c0 exit status 1 true [0xc002dd4000 0xc002dd4018 0xc002dd4030] [0xc002dd4000 0xc002dd4018 0xc002dd4030] [0xc002dd4010 0xc002dd4028] [0xba6c50 0xba6c50] 0xc002dc3740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:26:01.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:26:01.742: INFO: rc: 1 Jan 30 13:26:01.742: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed6420 exit status 1 true [0xc00203a270 0xc00203a2b8 0xc00203a2d0] [0xc00203a270 0xc00203a2b8 0xc00203a2d0] [0xc00203a298 0xc00203a2c8] [0xba6c50 0xba6c50] 0xc0026b7e00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:26:11.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:26:11.962: INFO: 
rc: 1 Jan 30 13:26:11.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e3f0 exit status 1 true [0xc001cb40a0 0xc001cb40b8 0xc001cb40d0] [0xc001cb40a0 0xc001cb40b8 0xc001cb40d0] [0xc001cb40b0 0xc001cb40c8] [0xba6c50 0xba6c50] 0xc0026d20c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:26:21.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:26:22.120: INFO: rc: 1 Jan 30 13:26:22.121: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ed64e0 exit status 1 true [0xc00203a2f0 0xc00203a328 0xc00203a370] [0xc00203a2f0 0xc00203a328 0xc00203a370] [0xc00203a318 0xc00203a348] [0xba6c50 0xba6c50] 0xc001eea9c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:26:32.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:26:32.299: INFO: rc: 1 Jan 30 13:26:32.300: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00244e4e0 exit status 1 
true [0xc001cb40d8 0xc001cb40f0 0xc001cb4108] [0xc001cb40d8 0xc001cb40f0 0xc001cb4108] [0xc001cb40e8 0xc001cb4100] [0xba6c50 0xba6c50] 0xc0026d23c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:26:42.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:26:42.563: INFO: rc: 1 Jan 30 13:26:42.564: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cac090 exit status 1 true [0xc000186040 0xc002dd4010 0xc002dd4028] [0xc000186040 0xc002dd4010 0xc002dd4028] [0xc002dd4008 0xc002dd4020] [0xba6c50 0xba6c50] 0xc0026b6660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 30 13:26:52.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2035 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 13:26:52.707: INFO: rc: 1 Jan 30 13:26:52.708: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jan 30 13:26:52.708: INFO: Scaling statefulset ss to 0 Jan 30 13:26:52.719: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 30 13:26:52.722: INFO: Deleting all statefulset in ns statefulset-2035 Jan 30 13:26:52.724: INFO: Scaling statefulset ss to 0 Jan 30 13:26:52.733: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 13:26:52.736: INFO: Deleting statefulset ss [AfterEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:26:52.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2035" for this suite. Jan 30 13:26:58.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:26:58.914: INFO: namespace statefulset-2035 deletion completed in 6.148425448s • [SLOW TEST:370.935 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:26:58.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-dca0ac1e-a503-4d47-ab5a-6abc3dcad4c1 STEP: Creating a pod to test consume 
configMaps Jan 30 13:26:59.126: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5" in namespace "projected-7126" to be "success or failure" Jan 30 13:26:59.175: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 49.004607ms Jan 30 13:27:01.186: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06007971s Jan 30 13:27:03.193: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067189109s Jan 30 13:27:05.200: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073719723s Jan 30 13:27:07.212: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085889165s Jan 30 13:27:09.227: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.100579346s STEP: Saw pod success Jan 30 13:27:09.227: INFO: Pod "pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5" satisfied condition "success or failure" Jan 30 13:27:09.232: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5 container projected-configmap-volume-test: STEP: delete the pod Jan 30 13:27:09.444: INFO: Waiting for pod pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5 to disappear Jan 30 13:27:09.460: INFO: Pod pod-projected-configmaps-1fa2bc79-e332-4211-828d-cc4bb0ef7bd5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:27:09.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7126" for this suite. Jan 30 13:27:15.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:27:15.827: INFO: namespace projected-7126 deletion completed in 6.356696499s • [SLOW TEST:16.913 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:27:15.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-7924 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7924 STEP: Deleting pre-stop pod Jan 30 13:27:39.046: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
} STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:27:39.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7924" for this suite.
Jan 30 13:28:17.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:28:17.326: INFO: namespace prestop-7924 deletion completed in 38.207885012s • [SLOW TEST:61.497 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:28:17.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 30 13:28:17.492: INFO: Waiting up to 5m0s for pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017" in namespace "emptydir-1374" to be "success or failure" Jan 30 13:28:17.501: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.663261ms Jan 30 13:28:19.511: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019237829s Jan 30 13:28:21.522: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.030177914s Jan 30 13:28:23.534: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041788455s Jan 30 13:28:25.544: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051754781s Jan 30 13:28:27.555: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062781769s STEP: Saw pod success Jan 30 13:28:27.555: INFO: Pod "pod-adf6424f-40bc-4d59-8dce-784ab685c017" satisfied condition "success or failure" Jan 30 13:28:27.566: INFO: Trying to get logs from node iruya-node pod pod-adf6424f-40bc-4d59-8dce-784ab685c017 container test-container: STEP: delete the pod Jan 30 13:28:27.639: INFO: Waiting for pod pod-adf6424f-40bc-4d59-8dce-784ab685c017 to disappear Jan 30 13:28:27.695: INFO: Pod pod-adf6424f-40bc-4d59-8dce-784ab685c017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:28:27.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1374" for this suite. 
Jan 30 13:28:33.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:28:33.945: INFO: namespace emptydir-1374 deletion completed in 6.236447069s

• [SLOW TEST:16.619 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:28:33.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 30 13:28:43.203: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:28:43.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5012" for this suite.
Jan 30 13:28:49.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:28:49.493: INFO: namespace container-runtime-5012 deletion completed in 6.121227404s

• [SLOW TEST:15.544 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:28:49.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-9cb4e751-8abe-4fce-80ed-63258d32c492
STEP: Creating a pod to test consume secrets
Jan 30 13:28:49.640: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686" in namespace "projected-8779" to be "success or failure"
Jan 30 13:28:49.646: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686": Phase="Pending", Reason="", readiness=false. Elapsed: 5.204198ms
Jan 30 13:28:51.653: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012357973s
Jan 30 13:28:53.663: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022247801s
Jan 30 13:28:55.674: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033885228s
Jan 30 13:28:57.684: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043113086s
Jan 30 13:28:59.693: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052526722s
STEP: Saw pod success
Jan 30 13:28:59.693: INFO: Pod "pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686" satisfied condition "success or failure"
Jan 30 13:28:59.699: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686 container secret-volume-test:
STEP: delete the pod
Jan 30 13:28:59.772: INFO: Waiting for pod pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686 to disappear
Jan 30 13:28:59.779: INFO: Pod pod-projected-secrets-0e7130b6-3cd4-4dc5-a66f-9376a4bea686 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:28:59.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8779" for this suite.
Jan 30 13:29:05.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:29:05.973: INFO: namespace projected-8779 deletion completed in 6.184926291s

• [SLOW TEST:16.480 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:29:05.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:29:14.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6767" for this suite.
Jan 30 13:29:20.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:29:21.070: INFO: namespace emptydir-wrapper-6767 deletion completed in 6.795695065s

• [SLOW TEST:15.097 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:29:21.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 30 13:32:23.497: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:23.512: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:25.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:25.525: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:27.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:27.529: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:29.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:29.524: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:31.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:31.524: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:33.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:33.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:35.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:35.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:37.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:37.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:39.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:39.531: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:41.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:41.525: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:43.516: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:43.552: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:45.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:45.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:47.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:47.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:49.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:49.614: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:51.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:51.530: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:53.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:53.525: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:55.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:55.530: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:57.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:57.520: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:32:59.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:32:59.524: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:01.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:01.524: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:03.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:03.520: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:05.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:05.523: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:07.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:07.523: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:09.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:09.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:11.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:11.525: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:13.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:13.523: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:15.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:15.524: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:17.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:17.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:19.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:19.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:21.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:21.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:23.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:23.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:25.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:25.523: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:27.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:27.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:29.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:29.519: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:31.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:31.574: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:33.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:33.528: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:35.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:35.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:37.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:37.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:39.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:39.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:41.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:41.522: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:43.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:43.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:45.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:45.521: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:47.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:47.526: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 13:33:49.513: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 13:33:49.521: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:33:49.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8745" for this suite.
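The long run of "Waiting for pod pod-with-poststart-exec-hook to disappear" / "still exists" pairs above is a fixed-interval poll: the framework re-reads the pod roughly every 2 seconds (as the timestamps show) until the GET reports the pod gone. A minimal sketch of that pattern, not the framework's actual Go implementation; `get_pod` is a hypothetical stand-in for the API read:

```python
import time

def wait_for_disappear(get_pod, name, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll get_pod(name) until it returns None (pod deleted) or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod(name) is None:
            return True   # pod no longer exists
        sleep(interval)   # fixed ~2s poll interval, as in the log above
    return False          # pod still exists at the deadline
```

With a stubbed `get_pod` that reports the pod twice and then returns None, the helper returns True without real sleeping (`sleep=lambda _: None`).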
Jan 30 13:34:11.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:34:11.724: INFO: namespace container-lifecycle-hook-8745 deletion completed in 22.191859138s

• [SLOW TEST:290.653 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:34:11.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-be19438e-fa51-47ea-b83b-31f8de95d96f in namespace container-probe-2824
Jan 30 13:34:21.906: INFO: Started pod busybox-be19438e-fa51-47ea-b83b-31f8de95d96f in namespace container-probe-2824
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 13:34:21.917: INFO: Initial restart count of pod busybox-be19438e-fa51-47ea-b83b-31f8de95d96f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:38:22.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2824" for this suite.
Jan 30 13:38:28.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:38:28.409: INFO: namespace container-probe-2824 deletion completed in 6.292295804s

• [SLOW TEST:256.684 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:38:28.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 30 13:38:28.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 30 13:38:28.650: INFO: stderr: ""
Jan 30 13:38:28.650: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:38:28.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-957" for this suite.
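The api-versions check above amounts to running `kubectl api-versions` and verifying that `v1` appears as one of the newline-separated group/version lines in stdout. A sketch of that validation; the `sample` string is an abbreviated copy of the stdout captured in the run above:

```python
def has_api_version(api_versions_stdout: str, wanted: str = "v1") -> bool:
    """True if `wanted` appears as an exact line of `kubectl api-versions` output."""
    return wanted in api_versions_stdout.splitlines()

# Abbreviated sample of the stdout logged in this run.
sample = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"
```

Exact line matching matters here: a substring check would wrongly accept `v1` because of entries like `apps/v1`.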
Jan 30 13:38:34.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:38:34.847: INFO: namespace kubectl-957 deletion completed in 6.190651964s

• [SLOW TEST:6.437 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:38:34.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8691
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 13:38:34.921: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 13:39:11.113: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8691 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 13:39:11.114: INFO: >>> kubeConfig: /root/.kube/config
I0130 13:39:11.196505       8 log.go:172] (0xc0030184d0) (0xc00219e960) Create stream
I0130 13:39:11.196742       8 log.go:172] (0xc0030184d0) (0xc00219e960) Stream added, broadcasting: 1
I0130 13:39:11.207305       8 log.go:172] (0xc0030184d0) Reply frame received for 1
I0130 13:39:11.207390       8 log.go:172] (0xc0030184d0) (0xc000211400) Create stream
I0130 13:39:11.207410       8 log.go:172] (0xc0030184d0) (0xc000211400) Stream added, broadcasting: 3
I0130 13:39:11.209955       8 log.go:172] (0xc0030184d0) Reply frame received for 3
I0130 13:39:11.210001       8 log.go:172] (0xc0030184d0) (0xc000211680) Create stream
I0130 13:39:11.210015       8 log.go:172] (0xc0030184d0) (0xc000211680) Stream added, broadcasting: 5
I0130 13:39:11.212516       8 log.go:172] (0xc0030184d0) Reply frame received for 5
I0130 13:39:11.396692       8 log.go:172] (0xc0030184d0) Data frame received for 3
I0130 13:39:11.396771       8 log.go:172] (0xc000211400) (3) Data frame handling
I0130 13:39:11.396794       8 log.go:172] (0xc000211400) (3) Data frame sent
I0130 13:39:11.560661       8 log.go:172] (0xc0030184d0) Data frame received for 1
I0130 13:39:11.560835       8 log.go:172] (0xc00219e960) (1) Data frame handling
I0130 13:39:11.560928       8 log.go:172] (0xc00219e960) (1) Data frame sent
I0130 13:39:11.561839       8 log.go:172] (0xc0030184d0) (0xc00219e960) Stream removed, broadcasting: 1
I0130 13:39:11.562055       8 log.go:172] (0xc0030184d0) (0xc000211680) Stream removed, broadcasting: 5
I0130 13:39:11.562164       8 log.go:172] (0xc0030184d0) (0xc000211400) Stream removed, broadcasting: 3
I0130 13:39:11.562225       8 log.go:172] (0xc0030184d0) Go away received
I0130 13:39:11.562317       8 log.go:172] (0xc0030184d0) (0xc00219e960) Stream removed, broadcasting: 1
I0130 13:39:11.562353       8 log.go:172] (0xc0030184d0) (0xc000211400) Stream removed, broadcasting: 3
I0130 13:39:11.562411       8 log.go:172] (0xc0030184d0) (0xc000211680) Stream removed, broadcasting: 5
Jan 30 13:39:11.562: INFO: Waiting for endpoints: map[]
Jan 30 13:39:11.577: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-8691 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 13:39:11.577: INFO: >>> kubeConfig: /root/.kube/config
I0130 13:39:11.655151       8 log.go:172] (0xc0007e6a50) (0xc001b34280) Create stream
I0130 13:39:11.655467       8 log.go:172] (0xc0007e6a50) (0xc001b34280) Stream added, broadcasting: 1
I0130 13:39:11.667915       8 log.go:172] (0xc0007e6a50) Reply frame received for 1
I0130 13:39:11.668111       8 log.go:172] (0xc0007e6a50) (0xc00219eb40) Create stream
I0130 13:39:11.668126       8 log.go:172] (0xc0007e6a50) (0xc00219eb40) Stream added, broadcasting: 3
I0130 13:39:11.670414       8 log.go:172] (0xc0007e6a50) Reply frame received for 3
I0130 13:39:11.670468       8 log.go:172] (0xc0007e6a50) (0xc001136000) Create stream
I0130 13:39:11.670487       8 log.go:172] (0xc0007e6a50) (0xc001136000) Stream added, broadcasting: 5
I0130 13:39:11.671893       8 log.go:172] (0xc0007e6a50) Reply frame received for 5
I0130 13:39:11.799234       8 log.go:172] (0xc0007e6a50) Data frame received for 3
I0130 13:39:11.799408       8 log.go:172] (0xc00219eb40) (3) Data frame handling
I0130 13:39:11.799574       8 log.go:172] (0xc00219eb40) (3) Data frame sent
I0130 13:39:11.938707       8 log.go:172] (0xc0007e6a50) (0xc00219eb40) Stream removed, broadcasting: 3
I0130 13:39:11.938954       8 log.go:172] (0xc0007e6a50) Data frame received for 1
I0130 13:39:11.938999       8 log.go:172] (0xc001b34280) (1) Data frame handling
I0130 13:39:11.939008       8 log.go:172] (0xc0007e6a50) (0xc001136000) Stream removed, broadcasting: 5
I0130 13:39:11.939031       8 log.go:172] (0xc001b34280) (1) Data frame sent
I0130 13:39:11.939054       8 log.go:172] (0xc0007e6a50) (0xc001b34280) Stream removed, broadcasting: 1
I0130 13:39:11.939084       8 log.go:172] (0xc0007e6a50) Go away received
I0130 13:39:11.939489       8 log.go:172] (0xc0007e6a50) (0xc001b34280) Stream removed, broadcasting: 1
I0130 13:39:11.939506       8 log.go:172] (0xc0007e6a50) (0xc00219eb40) Stream removed, broadcasting: 3
I0130 13:39:11.939521       8 log.go:172] (0xc0007e6a50) (0xc001136000) Stream removed, broadcasting: 5
Jan 30 13:39:11.939: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:39:11.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8691" for this suite.
Jan 30 13:39:33.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:39:34.119: INFO: namespace pod-network-test-8691 deletion completed in 22.167364135s

• [SLOW TEST:59.271 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:39:34.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 30 13:39:34.230: INFO: Waiting up to 5m0s for pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f" in namespace "emptydir-5659" to be "success or failure"
Jan 30 13:39:34.243: INFO: Pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.55227ms
Jan 30 13:39:36.257: INFO: Pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026943673s
Jan 30 13:39:38.267: INFO: Pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037189621s
Jan 30 13:39:40.277: INFO: Pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046560287s
Jan 30 13:39:42.301: INFO: Pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070949236s
STEP: Saw pod success
Jan 30 13:39:42.302: INFO: Pod "pod-00da4923-d228-405e-8fa9-53d01cebd96f" satisfied condition "success or failure"
Jan 30 13:39:42.325: INFO: Trying to get logs from node iruya-node pod pod-00da4923-d228-405e-8fa9-53d01cebd96f container test-container:
STEP: delete the pod
Jan 30 13:39:42.496: INFO: Waiting for pod pod-00da4923-d228-405e-8fa9-53d01cebd96f to disappear
Jan 30 13:39:42.504: INFO: Pod pod-00da4923-d228-405e-8fa9-53d01cebd96f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:39:42.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5659" for this suite.
Jan 30 13:39:48.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:39:48.667: INFO: namespace emptydir-5659 deletion completed in 6.153694659s

• [SLOW TEST:14.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:39:48.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 30 13:39:48.755: INFO: Waiting up to 5m0s for pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4" in namespace "containers-646" to be "success or failure"
Jan 30 13:39:48.762: INFO: Pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.082664ms
Jan 30 13:39:50.777: INFO: Pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022168468s
Jan 30 13:39:52.791: INFO: Pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035996035s
Jan 30 13:39:54.801: INFO: Pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04613654s
Jan 30 13:39:56.820: INFO: Pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064967738s
STEP: Saw pod success
Jan 30 13:39:56.821: INFO: Pod "client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4" satisfied condition "success or failure"
Jan 30 13:39:56.828: INFO: Trying to get logs from node iruya-node pod client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4 container test-container:
STEP: delete the pod
Jan 30 13:39:56.927: INFO: Waiting for pod client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4 to disappear
Jan 30 13:39:56.937: INFO: Pod client-containers-fd5f92b9-0e7d-404b-852a-38368b54c6a4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:39:56.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-646" for this suite.
Jan 30 13:40:02.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:40:03.057: INFO: namespace containers-646 deletion completed in 6.114249799s

• [SLOW TEST:14.390 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:40:03.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 13:40:03.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b" in namespace "downward-api-3152" to be "success or failure"
Jan 30 13:40:03.177: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.411441ms
Jan 30 13:40:05.189: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027926822s
Jan 30 13:40:07.196: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035145783s
Jan 30 13:40:09.205: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044648916s
Jan 30 13:40:11.215: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05427392s
Jan 30 13:40:13.226: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065189691s
STEP: Saw pod success
Jan 30 13:40:13.226: INFO: Pod "downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b" satisfied condition "success or failure"
Jan 30 13:40:13.230: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b container client-container:
STEP: delete the pod
Jan 30 13:40:13.298: INFO: Waiting for pod downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b to disappear
Jan 30 13:40:13.536: INFO: Pod downwardapi-volume-f8560210-4cea-4c30-9d43-e8038430b08b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:40:13.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3152" for this suite.
Jan 30 13:40:19.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:40:19.674: INFO: namespace downward-api-3152 deletion completed in 6.129785968s • [SLOW TEST:16.616 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:40:19.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 30 13:40:19.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c" in namespace "projected-3882" to be "success or failure" Jan 30 13:40:19.800: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.291999ms Jan 30 13:40:21.816: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032277154s Jan 30 13:40:23.845: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061203623s Jan 30 13:40:25.858: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073427129s Jan 30 13:40:27.867: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082770245s Jan 30 13:40:29.880: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095502693s STEP: Saw pod success Jan 30 13:40:29.880: INFO: Pod "downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c" satisfied condition "success or failure" Jan 30 13:40:29.886: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c container client-container: STEP: delete the pod Jan 30 13:40:30.044: INFO: Waiting for pod downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c to disappear Jan 30 13:40:30.069: INFO: Pod downwardapi-volume-047f0d27-128a-465c-b49f-fd54f0c57e4c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:40:30.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3882" for this suite. 
Jan 30 13:40:36.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:40:36.293: INFO: namespace projected-3882 deletion completed in 6.204767681s • [SLOW TEST:16.619 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:40:36.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-eb54a6e2-746e-480b-9e06-fb159d66f68b STEP: Creating a pod to test consume secrets Jan 30 13:40:37.100: INFO: Waiting up to 5m0s for pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5" in namespace "secrets-2165" to be "success or failure" Jan 30 13:40:37.109: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.724016ms Jan 30 13:40:39.189: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.088298258s Jan 30 13:40:41.197: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097015975s Jan 30 13:40:43.206: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105299057s Jan 30 13:40:45.230: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130167464s Jan 30 13:40:47.259: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158310349s STEP: Saw pod success Jan 30 13:40:47.259: INFO: Pod "pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5" satisfied condition "success or failure" Jan 30 13:40:47.264: INFO: Trying to get logs from node iruya-node pod pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5 container secret-volume-test: STEP: delete the pod Jan 30 13:40:47.592: INFO: Waiting for pod pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5 to disappear Jan 30 13:40:47.612: INFO: Pod pod-secrets-1aff0c99-e3fb-4b23-8848-7f60d7144be5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:40:47.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2165" for this suite. 
Jan 30 13:40:53.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:40:53.776: INFO: namespace secrets-2165 deletion completed in 6.157841678s • [SLOW TEST:17.482 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:40:53.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 30 13:41:14.076: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 30 13:41:14.093: INFO: Pod pod-with-prestop-http-hook still exists Jan 30 13:41:16.093: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 30 13:41:16.105: INFO: Pod pod-with-prestop-http-hook still exists Jan 30 13:41:18.093: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 30 13:41:18.101: INFO: Pod pod-with-prestop-http-hook still exists Jan 30 13:41:20.093: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 30 13:41:20.099: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:41:20.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5494" for this suite. 
Jan 30 13:41:42.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:41:42.285: INFO: namespace container-lifecycle-hook-5494 deletion completed in 22.152997649s • [SLOW TEST:48.508 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:41:42.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9442.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9442.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 
_http._tcp.dns-test-service.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9442.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9442.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9442.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 132.29.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.29.132_udp@PTR;check="$$(dig +tcp +noall +answer +search 132.29.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.29.132_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9442.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9442.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9442.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9442.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9442.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9442.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 132.29.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.29.132_udp@PTR;check="$$(dig +tcp +noall +answer +search 132.29.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.29.132_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 30 13:41:54.632: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.644: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.653: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.662: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.668: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.678: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.694: INFO: Unable to read wheezy_udp@PodARecord from pod 
dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.701: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.705: INFO: Unable to read 10.109.29.132_udp@PTR from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.709: INFO: Unable to read 10.109.29.132_tcp@PTR from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.714: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.719: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.722: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.726: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.730: INFO: Unable to 
read jessie_udp@_http._tcp.test-service-2.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.737: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9442.svc.cluster.local from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.740: INFO: Unable to read jessie_udp@PodARecord from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.743: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.747: INFO: Unable to read 10.109.29.132_udp@PTR from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.750: INFO: Unable to read 10.109.29.132_tcp@PTR from pod dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4: the server could not find the requested resource (get pods dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4) Jan 30 13:41:54.750: INFO: Lookups using dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4 failed for: [wheezy_udp@dns-test-service.dns-9442.svc.cluster.local wheezy_tcp@dns-test-service.dns-9442.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9442.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9442.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 
10.109.29.132_udp@PTR 10.109.29.132_tcp@PTR jessie_udp@dns-test-service.dns-9442.svc.cluster.local jessie_tcp@dns-test-service.dns-9442.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9442.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9442.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.29.132_udp@PTR 10.109.29.132_tcp@PTR] Jan 30 13:41:59.922: INFO: DNS probes using dns-9442/dns-test-e30b6523-576a-4fcf-b717-ea704e709ee4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:42:00.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9442" for this suite. Jan 30 13:42:06.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:42:06.449: INFO: namespace dns-9442 deletion completed in 6.250524672s • [SLOW TEST:24.163 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:42:06.450: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 30 13:42:06.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9663' Jan 30 13:42:08.724: INFO: stderr: "" Jan 30 13:42:08.724: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 30 13:42:08.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9663' Jan 30 13:42:09.459: INFO: stderr: "" Jan 30 13:42:09.459: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jan 30 13:42:10.480: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:10.480: INFO: Found 0 / 1 Jan 30 13:42:11.483: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:11.484: INFO: Found 0 / 1 Jan 30 13:42:12.473: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:12.473: INFO: Found 0 / 1 Jan 30 13:42:13.471: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:13.471: INFO: Found 0 / 1 Jan 30 13:42:14.474: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:14.474: INFO: Found 0 / 1 Jan 30 13:42:15.477: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:15.477: INFO: Found 0 / 1 Jan 30 13:42:16.473: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:16.473: INFO: Found 1 / 1 Jan 30 13:42:16.473: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 30 13:42:16.479: INFO: Selector matched 1 pods for map[app:redis] Jan 30 13:42:16.479: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. Jan 30 13:42:16.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-jkzs8 --namespace=kubectl-9663' Jan 30 13:42:16.659: INFO: stderr: "" Jan 30 13:42:16.659: INFO: stdout: "Name: redis-master-jkzs8\nNamespace: kubectl-9663\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Thu, 30 Jan 2020 13:42:08 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://787a601efd72eec7dd9b98164ce20bd5317e80ea1dda988e4edd07b5f44c78c8\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 30 Jan 2020 13:42:15 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlwhw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-dlwhw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dlwhw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-9663/redis-master-jkzs8 to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n" Jan 30 13:42:16.660: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9663' Jan 30 13:42:16.855: INFO: stderr: "" Jan 30 13:42:16.855: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9663\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-jkzs8\n" Jan 30 13:42:16.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9663' Jan 30 13:42:17.077: INFO: stderr: "" Jan 30 13:42:17.078: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9663\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.92.122\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 30 13:42:17.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 30 13:42:17.210: INFO: stderr: "" Jan 30 13:42:17.210: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ 
------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Thu, 30 Jan 2020 13:41:27 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 30 Jan 2020 13:41:27 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 30 Jan 2020 13:41:27 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 30 Jan 2020 13:41:27 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 179d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 110d\n kubectl-9663 redis-master-jkzs8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" 
Jan 30 13:42:17.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9663' Jan 30 13:42:17.330: INFO: stderr: "" Jan 30 13:42:17.330: INFO: stdout: "Name: kubectl-9663\nLabels: e2e-framework=kubectl\n e2e-run=2ec793b6-f568-4dd1-b59d-699706adfadf\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:42:17.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9663" for this suite. Jan 30 13:42:39.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:42:39.509: INFO: namespace kubectl-9663 deletion completed in 22.168195476s • [SLOW TEST:33.059 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:42:39.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jan 30 13:42:39.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5436' Jan 30 13:42:40.280: INFO: stderr: "" Jan 30 13:42:40.280: INFO: stdout: "pod/pause created\n" Jan 30 13:42:40.280: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 30 13:42:40.281: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5436" to be "running and ready" Jan 30 13:42:40.286: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.470184ms Jan 30 13:42:42.304: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022997999s Jan 30 13:42:44.315: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03471673s Jan 30 13:42:46.352: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071205327s Jan 30 13:42:48.365: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.084804149s Jan 30 13:42:48.366: INFO: Pod "pause" satisfied condition "running and ready" Jan 30 13:42:48.366: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jan 30 13:42:48.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5436' Jan 30 13:42:48.594: INFO: stderr: "" Jan 30 13:42:48.595: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 30 13:42:48.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5436' Jan 30 13:42:48.722: INFO: stderr: "" Jan 30 13:42:48.722: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 30 13:42:48.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5436' Jan 30 13:42:48.842: INFO: stderr: "" Jan 30 13:42:48.842: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 30 13:42:48.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5436' Jan 30 13:42:48.951: INFO: stderr: "" Jan 30 13:42:48.952: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jan 30 13:42:48.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5436' Jan 30 13:42:49.087: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running 
resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 30 13:42:49.088: INFO: stdout: "pod \"pause\" force deleted\n" Jan 30 13:42:49.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5436' Jan 30 13:42:49.307: INFO: stderr: "No resources found.\n" Jan 30 13:42:49.308: INFO: stdout: "" Jan 30 13:42:49.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5436 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 30 13:42:49.510: INFO: stderr: "" Jan 30 13:42:49.510: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:42:49.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5436" for this suite. 
Jan 30 13:42:55.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:42:55.701: INFO: namespace kubectl-5436 deletion completed in 6.178614096s • [SLOW TEST:16.191 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:42:55.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jan 30 13:42:55.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1522 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 30 13:43:05.717: INFO: stderr: "kubectl run --generator=job/v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0130 13:43:04.279909 2747 log.go:172] (0xc000aa42c0) (0xc00065ea00) Create stream\nI0130 13:43:04.280445 2747 log.go:172] (0xc000aa42c0) (0xc00065ea00) Stream added, broadcasting: 1\nI0130 13:43:04.291859 2747 log.go:172] (0xc000aa42c0) Reply frame received for 1\nI0130 13:43:04.292100 2747 log.go:172] (0xc000aa42c0) (0xc000a505a0) Create stream\nI0130 13:43:04.292130 2747 log.go:172] (0xc000aa42c0) (0xc000a505a0) Stream added, broadcasting: 3\nI0130 13:43:04.300382 2747 log.go:172] (0xc000aa42c0) Reply frame received for 3\nI0130 13:43:04.300662 2747 log.go:172] (0xc000aa42c0) (0xc0003f0000) Create stream\nI0130 13:43:04.300683 2747 log.go:172] (0xc000aa42c0) (0xc0003f0000) Stream added, broadcasting: 5\nI0130 13:43:04.304142 2747 log.go:172] (0xc000aa42c0) Reply frame received for 5\nI0130 13:43:04.304302 2747 log.go:172] (0xc000aa42c0) (0xc000a50640) Create stream\nI0130 13:43:04.304329 2747 log.go:172] (0xc000aa42c0) (0xc000a50640) Stream added, broadcasting: 7\nI0130 13:43:04.306521 2747 log.go:172] (0xc000aa42c0) Reply frame received for 7\nI0130 13:43:04.307421 2747 log.go:172] (0xc000a505a0) (3) Writing data frame\nI0130 13:43:04.307897 2747 log.go:172] (0xc000a505a0) (3) Writing data frame\nI0130 13:43:04.322416 2747 log.go:172] (0xc000aa42c0) Data frame received for 5\nI0130 13:43:04.322506 2747 log.go:172] (0xc0003f0000) (5) Data frame handling\nI0130 13:43:04.322571 2747 log.go:172] (0xc0003f0000) (5) Data frame sent\nI0130 13:43:04.325164 2747 log.go:172] (0xc000aa42c0) Data frame received for 5\nI0130 13:43:04.325178 2747 log.go:172] (0xc0003f0000) (5) Data frame handling\nI0130 13:43:04.325188 2747 log.go:172] (0xc0003f0000) (5) Data frame sent\nI0130 13:43:05.665463 2747 log.go:172] (0xc000aa42c0) Data frame received for 1\nI0130 13:43:05.665624 2747 log.go:172] (0xc000aa42c0) 
(0xc000a50640) Stream removed, broadcasting: 7\nI0130 13:43:05.665712 2747 log.go:172] (0xc00065ea00) (1) Data frame handling\nI0130 13:43:05.665739 2747 log.go:172] (0xc00065ea00) (1) Data frame sent\nI0130 13:43:05.666044 2747 log.go:172] (0xc000aa42c0) (0xc000a505a0) Stream removed, broadcasting: 3\nI0130 13:43:05.666104 2747 log.go:172] (0xc000aa42c0) (0xc00065ea00) Stream removed, broadcasting: 1\nI0130 13:43:05.666309 2747 log.go:172] (0xc000aa42c0) (0xc0003f0000) Stream removed, broadcasting: 5\nI0130 13:43:05.666344 2747 log.go:172] (0xc000aa42c0) Go away received\nI0130 13:43:05.666770 2747 log.go:172] (0xc000aa42c0) (0xc00065ea00) Stream removed, broadcasting: 1\nI0130 13:43:05.666810 2747 log.go:172] (0xc000aa42c0) (0xc000a505a0) Stream removed, broadcasting: 3\nI0130 13:43:05.666828 2747 log.go:172] (0xc000aa42c0) (0xc0003f0000) Stream removed, broadcasting: 5\nI0130 13:43:05.666842 2747 log.go:172] (0xc000aa42c0) (0xc000a50640) Stream removed, broadcasting: 7\n" Jan 30 13:43:05.718: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:43:07.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1522" for this suite. 
Jan 30 13:43:13.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:43:13.977: INFO: namespace kubectl-1522 deletion completed in 6.165470075s • [SLOW TEST:18.276 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:43:13.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Jan 30 13:43:26.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-34a50723-e303-4f80-a0eb-5af40f58e691 -c busybox-main-container --namespace=emptydir-2688 -- cat /usr/share/volumeshare/shareddata.txt' Jan 30 13:43:26.941: INFO: stderr: "I0130 13:43:26.409152 2769 log.go:172] (0xc000866420) (0xc0005a6a00) Create stream\nI0130 
13:43:26.409395 2769 log.go:172] (0xc000866420) (0xc0005a6a00) Stream added, broadcasting: 1\nI0130 13:43:26.416832 2769 log.go:172] (0xc000866420) Reply frame received for 1\nI0130 13:43:26.416893 2769 log.go:172] (0xc000866420) (0xc000772000) Create stream\nI0130 13:43:26.416921 2769 log.go:172] (0xc000866420) (0xc000772000) Stream added, broadcasting: 3\nI0130 13:43:26.418960 2769 log.go:172] (0xc000866420) Reply frame received for 3\nI0130 13:43:26.419021 2769 log.go:172] (0xc000866420) (0xc0009a2000) Create stream\nI0130 13:43:26.419053 2769 log.go:172] (0xc000866420) (0xc0009a2000) Stream added, broadcasting: 5\nI0130 13:43:26.420933 2769 log.go:172] (0xc000866420) Reply frame received for 5\nI0130 13:43:26.745209 2769 log.go:172] (0xc000866420) Data frame received for 3\nI0130 13:43:26.745389 2769 log.go:172] (0xc000772000) (3) Data frame handling\nI0130 13:43:26.745426 2769 log.go:172] (0xc000772000) (3) Data frame sent\nI0130 13:43:26.925564 2769 log.go:172] (0xc000866420) Data frame received for 1\nI0130 13:43:26.925968 2769 log.go:172] (0xc0005a6a00) (1) Data frame handling\nI0130 13:43:26.926057 2769 log.go:172] (0xc0005a6a00) (1) Data frame sent\nI0130 13:43:26.926168 2769 log.go:172] (0xc000866420) (0xc000772000) Stream removed, broadcasting: 3\nI0130 13:43:26.926352 2769 log.go:172] (0xc000866420) (0xc0009a2000) Stream removed, broadcasting: 5\nI0130 13:43:26.926417 2769 log.go:172] (0xc000866420) (0xc0005a6a00) Stream removed, broadcasting: 1\nI0130 13:43:26.926472 2769 log.go:172] (0xc000866420) Go away received\nI0130 13:43:26.928416 2769 log.go:172] (0xc000866420) (0xc0005a6a00) Stream removed, broadcasting: 1\nI0130 13:43:26.928462 2769 log.go:172] (0xc000866420) (0xc000772000) Stream removed, broadcasting: 3\nI0130 13:43:26.928491 2769 log.go:172] (0xc000866420) (0xc0009a2000) Stream removed, broadcasting: 5\n" Jan 30 13:43:26.942: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:43:26.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2688" for this suite. Jan 30 13:43:32.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:43:33.217: INFO: namespace emptydir-2688 deletion completed in 6.265223536s • [SLOW TEST:19.237 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:43:33.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0130 13:44:03.959391 8 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 30 13:44:03.959: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:44:03.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4590" for this suite. 
Jan 30 13:44:09.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:44:10.100: INFO: namespace gc-4590 deletion completed in 6.137108837s • [SLOW TEST:36.882 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:44:10.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 30 13:44:11.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4428' Jan 30 13:44:12.010: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is 
DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 30 13:44:12.010: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jan 30 13:44:14.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4428' Jan 30 13:44:14.777: INFO: stderr: "" Jan 30 13:44:14.778: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:44:14.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4428" for this suite. 
Jan 30 13:44:20.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:44:21.032: INFO: namespace kubectl-4428 deletion completed in 6.188057833s • [SLOW TEST:10.932 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:44:21.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 30 13:44:21.151: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 30 13:44:21.180: INFO: Number of nodes with available pods: 0 Jan 30 13:44:21.180: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 30 13:44:21.283: INFO: Number of nodes with available pods: 0 Jan 30 13:44:21.283: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:22.288: INFO: Number of nodes with available pods: 0 Jan 30 13:44:22.288: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:23.295: INFO: Number of nodes with available pods: 0 Jan 30 13:44:23.295: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:24.296: INFO: Number of nodes with available pods: 0 Jan 30 13:44:24.296: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:25.292: INFO: Number of nodes with available pods: 0 Jan 30 13:44:25.292: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:26.297: INFO: Number of nodes with available pods: 0 Jan 30 13:44:26.298: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:27.293: INFO: Number of nodes with available pods: 0 Jan 30 13:44:27.293: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:28.293: INFO: Number of nodes with available pods: 0 Jan 30 13:44:28.293: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:29.295: INFO: Number of nodes with available pods: 1 Jan 30 13:44:29.295: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 30 13:44:29.390: INFO: Number of nodes with available pods: 1 Jan 30 13:44:29.390: INFO: Number of running nodes: 0, number of available pods: 1 Jan 30 13:44:30.408: INFO: Number of nodes with available pods: 0 Jan 30 13:44:30.408: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 30 13:44:30.477: INFO: Number of nodes with available pods: 0 Jan 30 13:44:30.477: INFO: Node iruya-node is running more than one daemon pod Jan 30 13:44:31.487: INFO: Number of nodes with available pods: 0 Jan 30 
13:44:31.487: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:32.490: INFO: Number of nodes with available pods: 0
Jan 30 13:44:32.491: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:33.486: INFO: Number of nodes with available pods: 0
Jan 30 13:44:33.487: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:34.490: INFO: Number of nodes with available pods: 0
Jan 30 13:44:34.490: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:35.513: INFO: Number of nodes with available pods: 0
Jan 30 13:44:35.513: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:36.494: INFO: Number of nodes with available pods: 0
Jan 30 13:44:36.494: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:37.489: INFO: Number of nodes with available pods: 0
Jan 30 13:44:37.489: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:38.494: INFO: Number of nodes with available pods: 0
Jan 30 13:44:38.494: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:39.486: INFO: Number of nodes with available pods: 0
Jan 30 13:44:39.487: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:40.490: INFO: Number of nodes with available pods: 0
Jan 30 13:44:40.490: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:41.486: INFO: Number of nodes with available pods: 0
Jan 30 13:44:41.486: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:42.488: INFO: Number of nodes with available pods: 0
Jan 30 13:44:42.489: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:43.491: INFO: Number of nodes with available pods: 0
Jan 30 13:44:43.491: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:44.494: INFO: Number of nodes with available pods: 0
Jan 30 13:44:44.494: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:45.487: INFO: Number of nodes with available pods: 0
Jan 30 13:44:45.487: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:46.525: INFO: Number of nodes with available pods: 0
Jan 30 13:44:46.525: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:47.491: INFO: Number of nodes with available pods: 0
Jan 30 13:44:47.492: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:48.498: INFO: Number of nodes with available pods: 0
Jan 30 13:44:48.498: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:49.489: INFO: Number of nodes with available pods: 0
Jan 30 13:44:49.489: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:50.488: INFO: Number of nodes with available pods: 0
Jan 30 13:44:50.488: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:51.488: INFO: Number of nodes with available pods: 0
Jan 30 13:44:51.489: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:52.489: INFO: Number of nodes with available pods: 0
Jan 30 13:44:52.490: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:53.487: INFO: Number of nodes with available pods: 0
Jan 30 13:44:53.487: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:54.489: INFO: Number of nodes with available pods: 0
Jan 30 13:44:54.490: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:55.488: INFO: Number of nodes with available pods: 0
Jan 30 13:44:55.488: INFO: Node iruya-node is running more than one daemon pod
Jan 30 13:44:56.496: INFO: Number of nodes with available pods: 1
Jan 30 13:44:56.496: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5450, will wait for the garbage collector to delete the pods
Jan 30 13:44:56.576: INFO: Deleting DaemonSet.extensions daemon-set took: 12.392695ms
Jan 30 13:44:56.977: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.640066ms
Jan 30 13:45:06.590: INFO: Number of nodes with available pods: 0
Jan 30 13:45:06.591: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 13:45:06.595: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5450/daemonsets","resourceVersion":"22442782"},"items":null}
Jan 30 13:45:06.598: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5450/pods","resourceVersion":"22442782"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:45:06.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5450" for this suite.
Jan 30 13:45:12.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:45:12.813: INFO: namespace daemonsets-5450 deletion completed in 6.15940775s

• [SLOW TEST:51.781 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:45:12.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-qhhw
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 13:45:12.912: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qhhw" in namespace "subpath-3264" to be "success or failure"
Jan 30 13:45:12.921: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313893ms
Jan 30 13:45:14.930: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017773206s
Jan 30 13:45:16.952: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039563336s
Jan 30 13:45:18.967: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054797265s
Jan 30 13:45:20.977: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064426653s
Jan 30 13:45:22.985: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 10.072817206s
Jan 30 13:45:25.012: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 12.099354253s
Jan 30 13:45:27.029: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 14.116389835s
Jan 30 13:45:29.047: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 16.134390389s
Jan 30 13:45:31.058: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 18.145376546s
Jan 30 13:45:33.070: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 20.157547803s
Jan 30 13:45:35.081: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 22.1683181s
Jan 30 13:45:37.090: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 24.177858759s
Jan 30 13:45:39.104: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 26.191410889s
Jan 30 13:45:41.113: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Running", Reason="", readiness=true. Elapsed: 28.200932747s
Jan 30 13:45:43.121: INFO: Pod "pod-subpath-test-configmap-qhhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.208314917s
STEP: Saw pod success
Jan 30 13:45:43.121: INFO: Pod "pod-subpath-test-configmap-qhhw" satisfied condition "success or failure"
Jan 30 13:45:43.125: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-qhhw container test-container-subpath-configmap-qhhw:
STEP: delete the pod
Jan 30 13:45:43.287: INFO: Waiting for pod pod-subpath-test-configmap-qhhw to disappear
Jan 30 13:45:43.291: INFO: Pod pod-subpath-test-configmap-qhhw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qhhw
Jan 30 13:45:43.292: INFO: Deleting pod "pod-subpath-test-configmap-qhhw" in namespace "subpath-3264"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:45:43.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3264" for this suite.
Jan 30 13:45:49.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:45:49.631: INFO: namespace subpath-3264 deletion completed in 6.326544095s

• [SLOW TEST:36.818 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:45:49.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-78f22867-e399-4f4d-9bee-0a11d0c09381
STEP: Creating secret with name s-test-opt-upd-93442ca6-b783-444c-9ba5-8c69659fe2ab
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-78f22867-e399-4f4d-9bee-0a11d0c09381
STEP: Updating secret s-test-opt-upd-93442ca6-b783-444c-9ba5-8c69659fe2ab
STEP: Creating secret with name s-test-opt-create-17c5525b-51d8-4665-b339-698958e125c3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:47:18.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4936" for this suite.
Jan 30 13:47:42.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:47:42.891: INFO: namespace projected-4936 deletion completed in 24.233507751s

• [SLOW TEST:113.259 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:47:42.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 30 13:47:43.049: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:47:56.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2997" for this suite.
Jan 30 13:48:02.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:48:02.471: INFO: namespace init-container-2997 deletion completed in 6.26317773s

• [SLOW TEST:19.579 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:48:02.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-dbd0bec4-6aed-4b9d-801f-e29da2e9567d
STEP: Creating a pod to test consume configMaps
Jan 30 13:48:02.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b" in namespace "projected-1297" to be "success or failure"
Jan 30 13:48:02.578: INFO: Pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636463ms
Jan 30 13:48:04.594: INFO: Pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024110764s
Jan 30 13:48:06.610: INFO: Pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040677248s
Jan 30 13:48:08.624: INFO: Pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054572499s
Jan 30 13:48:10.631: INFO: Pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062025435s
STEP: Saw pod success
Jan 30 13:48:10.632: INFO: Pod "pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b" satisfied condition "success or failure"
Jan 30 13:48:10.634: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b container projected-configmap-volume-test:
STEP: delete the pod
Jan 30 13:48:10.782: INFO: Waiting for pod pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b to disappear
Jan 30 13:48:10.795: INFO: Pod pod-projected-configmaps-9d06297f-753b-460c-ac43-1b6355bd4c6b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:48:10.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1297" for this suite.
Jan 30 13:48:16.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:48:17.017: INFO: namespace projected-1297 deletion completed in 6.212776758s

• [SLOW TEST:14.546 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:48:17.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8c495235-3e59-4ee2-95ad-c3f01f674464
STEP: Creating a pod to test consume secrets
Jan 30 13:48:17.301: INFO: Waiting up to 5m0s for pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540" in namespace "secrets-6596" to be "success or failure"
Jan 30 13:48:17.326: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540": Phase="Pending", Reason="", readiness=false. Elapsed: 24.86442ms
Jan 30 13:48:19.362: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060624013s
Jan 30 13:48:21.378: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076355629s
Jan 30 13:48:23.388: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086719577s
Jan 30 13:48:25.398: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096668214s
Jan 30 13:48:27.406: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10398242s
STEP: Saw pod success
Jan 30 13:48:27.406: INFO: Pod "pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540" satisfied condition "success or failure"
Jan 30 13:48:27.410: INFO: Trying to get logs from node iruya-node pod pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540 container secret-volume-test:
STEP: delete the pod
Jan 30 13:48:27.462: INFO: Waiting for pod pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540 to disappear
Jan 30 13:48:27.551: INFO: Pod pod-secrets-717520f9-f608-4e8d-b453-51ae43e56540 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:48:27.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6596" for this suite.
Jan 30 13:48:33.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:48:33.753: INFO: namespace secrets-6596 deletion completed in 6.197369486s
STEP: Destroying namespace "secret-namespace-9024" for this suite.
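The cross-namespace secret test above relies on names being unique only within a namespace, so a secret in one namespace cannot shadow a same-named secret in another. A two-level map is the minimal model of that scoping (illustrative only, not the API server's storage):

```go
package main

import "fmt"

// store models namespace-scoped naming: namespace -> name -> value.
type store map[string]map[string]string

func (s store) put(ns, name, value string) {
	if s[ns] == nil {
		s[ns] = map[string]string{}
	}
	s[ns][name] = value
}

func (s store) get(ns, name string) (string, bool) {
	v, ok := s[ns][name]
	return v, ok
}

func main() {
	s := store{}
	// Same secret name in two namespaces, as in the test above.
	s.put("secrets-6596", "secret-test", "a")
	s.put("secret-namespace-9024", "secret-test", "b")
	v, _ := s.get("secrets-6596", "secret-test")
	fmt.Println(v) // the lookup is scoped; the other namespace is irrelevant
}
```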
Jan 30 13:48:39.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:48:39.940: INFO: namespace secret-namespace-9024 deletion completed in 6.187494834s

• [SLOW TEST:22.922 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:48:39.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-92021dd0-3899-4710-b7ce-7fc2a40a1734
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-92021dd0-3899-4710-b7ce-7fc2a40a1734
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:48:50.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3706" for this suite.
Jan 30 13:49:12.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:49:12.835: INFO: namespace projected-3706 deletion completed in 22.191866299s

• [SLOW TEST:32.894 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:49:12.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0130 13:49:53.267412       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 13:49:53.267: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:49:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5152" for this suite.
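Orphan deletion, as exercised by the garbage collector test above, removes the owner but leaves dependents in place with their owner reference cleared, whereas a cascading delete removes dependents too. A toy model of that distinction (not the real garbage collector):

```go
package main

import "fmt"

// object is a minimal stand-in for an API object with at most one owner.
type object struct {
	name  string
	owner string // "" means no owner reference
}

// deleteOwner removes the named owner. With orphan=true its dependents
// survive with the reference cleared; with orphan=false they are deleted.
func deleteOwner(objs []object, owner string, orphan bool) []object {
	var out []object
	for _, o := range objs {
		if o.name == owner {
			continue // the owner itself is always removed
		}
		if o.owner == owner {
			if !orphan {
				continue // cascading delete removes dependents
			}
			o.owner = "" // orphan: keep the pod, drop the reference
		}
		out = append(out, o)
	}
	return out
}

func main() {
	objs := []object{{"rc", ""}, {"pod-a", "rc"}, {"pod-b", "rc"}}
	fmt.Println(len(deleteOwner(objs, "rc", true)))  // pods survive
	fmt.Println(len(deleteOwner(objs, "rc", false))) // pods are removed
}
```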
Jan 30 13:50:11.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:50:11.521: INFO: namespace gc-5152 deletion completed in 18.248684997s

• [SLOW TEST:58.686 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:50:11.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 30 13:50:11.752: INFO: Waiting up to 5m0s for pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e" in namespace "downward-api-2104" to be "success or failure"
Jan 30 13:50:11.799: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.871564ms
Jan 30 13:50:13.829: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076578434s
Jan 30 13:50:15.852: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099267981s
Jan 30 13:50:17.862: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109500368s
Jan 30 13:50:19.879: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126440478s
Jan 30 13:50:21.898: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145422431s
STEP: Saw pod success
Jan 30 13:50:21.898: INFO: Pod "downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e" satisfied condition "success or failure"
Jan 30 13:50:21.905: INFO: Trying to get logs from node iruya-node pod downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e container dapi-container:
STEP: delete the pod
Jan 30 13:50:22.034: INFO: Waiting for pod downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e to disappear
Jan 30 13:50:22.040: INFO: Pod downward-api-50c639fc-ba6f-4188-abb0-b17b258c1b3e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:50:22.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2104" for this suite.
Jan 30 13:50:28.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:50:28.288: INFO: namespace downward-api-2104 deletion completed in 6.228767021s

• [SLOW TEST:16.766 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:50:28.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 13:50:28.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331" in namespace "downward-api-942" to be "success or failure"
Jan 30 13:50:28.476: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Pending", Reason="", readiness=false. Elapsed: 40.877935ms
Jan 30 13:50:30.489: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053077011s
Jan 30 13:50:32.500: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064355653s
Jan 30 13:50:34.522: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086900592s
Jan 30 13:50:36.536: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100927922s
Jan 30 13:50:38.575: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139244506s
Jan 30 13:50:40.911: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.47496333s
STEP: Saw pod success
Jan 30 13:50:40.911: INFO: Pod "downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331" satisfied condition "success or failure"
Jan 30 13:50:40.918: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331 container client-container:
STEP: delete the pod
Jan 30 13:50:40.996: INFO: Waiting for pod downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331 to disappear
Jan 30 13:50:41.141: INFO: Pod downwardapi-volume-10210ff3-d702-4579-8a53-f85e29c59331 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:50:41.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-942" for this suite.
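The memory-limit test above reads the limit through a downward API resourceFieldRef, which takes an optional divisor; the container observes limit/divisor, rounded up in Kubernetes' quantity arithmetic. A sketch of that division with hypothetical example values (64Mi limit, 1Mi divisor):

```go
package main

import "fmt"

// divideQuantity models the downward API divisor: the reported value is
// the limit divided by the divisor, rounded up (ceiling division).
func divideQuantity(limitBytes, divisor int64) int64 {
	return (limitBytes + divisor - 1) / divisor
}

func main() {
	limit := int64(64 * 1024 * 1024) // hypothetical 64Mi memory limit
	fmt.Println(divideQuantity(limit, 1024*1024))
}
```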
Jan 30 13:50:47.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 13:50:47.368: INFO: namespace downward-api-942 deletion completed in 6.220344063s

• [SLOW TEST:19.080 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 13:50:47.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-036cc7a4-c47e-4232-b4bb-66ef90bf75f2
STEP: Creating a pod to test consume secrets
Jan 30 13:50:47.467: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7" in namespace "projected-9570" to be "success or failure"
Jan 30 13:50:47.475: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.756369ms
Jan 30 13:50:49.487: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019448671s
Jan 30 13:50:51.504: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03623526s
Jan 30 13:50:53.513: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045280135s
Jan 30 13:50:55.521: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053514955s
Jan 30 13:50:57.535: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067236507s
STEP: Saw pod success
Jan 30 13:50:57.535: INFO: Pod "pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7" satisfied condition "success or failure"
Jan 30 13:50:57.539: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7 container projected-secret-volume-test:
STEP: delete the pod
Jan 30 13:50:57.610: INFO: Waiting for pod pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7 to disappear
Jan 30 13:50:57.617: INFO: Pod pod-projected-secrets-68fe7f3f-cf54-4fc0-a635-1620caca5fd7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:50:57.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9570" for this suite.
Jan 30 13:51:03.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:51:03.885: INFO: namespace projected-9570 deletion completed in 6.260862548s • [SLOW TEST:16.517 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:51:03.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 30 13:51:04.032: INFO: Creating deployment "nginx-deployment" Jan 30 13:51:04.044: INFO: Waiting for observed generation 1 Jan 30 13:51:07.909: INFO: Waiting for all required pods to come up Jan 30 13:51:08.237: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 30 13:51:32.547: INFO: Waiting for deployment "nginx-deployment" to complete Jan 30 13:51:32.556: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 30 13:51:32.568: INFO: Updating deployment 
nginx-deployment Jan 30 13:51:32.568: INFO: Waiting for observed generation 2 Jan 30 13:51:35.071: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 30 13:51:35.136: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 30 13:51:36.768: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 30 13:51:37.666: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 30 13:51:37.666: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 30 13:51:37.833: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 30 13:51:37.848: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 30 13:51:37.848: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 30 13:51:37.869: INFO: Updating deployment nginx-deployment Jan 30 13:51:37.869: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 30 13:51:37.921: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 30 13:51:38.492: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 30 13:51:39.131: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4964,SelfLink:/apis/apps/v1/namespaces/deployment-4964/deployments/nginx-deployment,UID:857678c9-c3f3-496b-a01e-0e9baa7e1cd7,ResourceVersion:22443939,Generation:3,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-30 13:51:36 +0000 UTC 2020-01-30 13:51:04 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-30 13:51:38 +0000 UTC 2020-01-30 13:51:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 30 13:51:39.596: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4964,SelfLink:/apis/apps/v1/namespaces/deployment-4964/replicasets/nginx-deployment-55fb7cb77f,UID:4602ac9e-4b6b-4de9-b422-c1b99f178e7d,ResourceVersion:22443934,Generation:3,CreationTimestamp:2020-01-30 13:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 857678c9-c3f3-496b-a01e-0e9baa7e1cd7 0xc002b03987 0xc002b03988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 13:51:39.597: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 30 13:51:39.597: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4964,SelfLink:/apis/apps/v1/namespaces/deployment-4964/replicasets/nginx-deployment-7b8c6f4498,UID:201b2a11-ba30-453c-9d75-bede4af3b2ed,ResourceVersion:22443932,Generation:3,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 857678c9-c3f3-496b-a01e-0e9baa7e1cd7 0xc002b03bb7 0xc002b03bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 30 13:51:40.191: INFO: Pod "nginx-deployment-55fb7cb77f-5ktd2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5ktd2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-5ktd2,UID:d4780257-9a6b-447e-9583-67ddee4fbb46,ResourceVersion:22443897,Generation:0,CreationTimestamp:2020-01-30 13:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000630a47 0xc000630a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000630ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000630ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-30 13:51:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.192: INFO: Pod "nginx-deployment-55fb7cb77f-6mk7p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6mk7p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-6mk7p,UID:7ffb3e85-3120-4de0-9afc-61a34ff6fd15,ResourceVersion:22443907,Generation:0,CreationTimestamp:2020-01-30 13:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000630cb7 0xc000630cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000630d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc000630da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-30 13:51:32 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.192: INFO: Pod "nginx-deployment-55fb7cb77f-782sx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-782sx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-782sx,UID:261ebae3-52e1-4d48-a101-a4f0767620eb,ResourceVersion:22443979,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000630e87 0xc000630e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000630f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000630f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.192: INFO: Pod "nginx-deployment-55fb7cb77f-8z4lr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8z4lr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-8z4lr,UID:b5d4d862-1a14-480c-8d28-61f86adc7f9c,ResourceVersion:22443930,Generation:0,CreationTimestamp:2020-01-30 13:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000630fe7 
0xc000630fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631060} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:35 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:34 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-30 13:51:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.193: INFO: Pod "nginx-deployment-55fb7cb77f-9wnt2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9wnt2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-9wnt2,UID:59dcad1a-67e7-4ba5-8c71-f599d0501d16,ResourceVersion:22443920,Generation:0,CreationTimestamp:2020-01-30 13:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000631167 0xc000631168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006311d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0006311f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-30 13:51:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.193: INFO: Pod "nginx-deployment-55fb7cb77f-jnrws" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jnrws,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-jnrws,UID:1e293ea7-8cd0-43d4-acaa-d1194b31c744,ResourceVersion:22443941,Generation:0,CreationTimestamp:2020-01-30 13:51:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000631317 0xc000631318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631380} {node.kubernetes.io/unreachable Exists NoExecute 0xc0006313b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.193: INFO: Pod "nginx-deployment-55fb7cb77f-kr6rj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kr6rj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-kr6rj,UID:9214c619-5c6d-4ec2-b189-fd1a78f5b050,ResourceVersion:22443961,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000631437 
0xc000631438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006314b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0006314d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.193: INFO: Pod "nginx-deployment-55fb7cb77f-lbg9s" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lbg9s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-lbg9s,UID:403e008e-a69f-4c89-91dc-c0d19e72704f,ResourceVersion:22443965,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000631557 0xc000631558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006315c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0006315e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.194: INFO: Pod "nginx-deployment-55fb7cb77f-mx4xd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mx4xd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-mx4xd,UID:a43e08d9-bf14-4268-90ba-71669ee579f1,ResourceVersion:22443960,Generation:0,CreationTimestamp:2020-01-30 13:51:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000631667 
0xc000631668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006316e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.194: INFO: Pod "nginx-deployment-55fb7cb77f-pcdh7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pcdh7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-pcdh7,UID:ad1bee74-8281-478b-9a09-b65938e9a1f6,ResourceVersion:22443973,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc0006317a7 0xc0006317a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000631820} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.194: INFO: Pod "nginx-deployment-55fb7cb77f-r9sqt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r9sqt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-r9sqt,UID:082443a2-c853-439b-aa25-9eae967a0f43,ResourceVersion:22443959,Generation:0,CreationTimestamp:2020-01-30 13:51:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc0006318d7 0xc0006318d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631940} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.195: INFO: Pod "nginx-deployment-55fb7cb77f-vx7zh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vx7zh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-vx7zh,UID:1f6ae86b-7e81-4e25-ba15-de528b43b9a5,ResourceVersion:22443896,Generation:0,CreationTimestamp:2020-01-30 13:51:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc0006319f7 
0xc0006319f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:32 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-30 13:51:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.195: INFO: Pod "nginx-deployment-55fb7cb77f-x4pqd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x4pqd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-55fb7cb77f-x4pqd,UID:dca1a96c-19b4-491d-8d01-f6f06613f0d6,ResourceVersion:22443974,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4602ac9e-4b6b-4de9-b422-c1b99f178e7d 0xc000631b57 0xc000631b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631bd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.195: INFO: Pod "nginx-deployment-7b8c6f4498-769xv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-769xv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-769xv,UID:df6e8038-72bb-46ae-a33e-3bd975209f73,ResourceVersion:22443962,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc000631c77 0xc000631c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.195: INFO: Pod "nginx-deployment-7b8c6f4498-9c8jk" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9c8jk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-9c8jk,UID:eefd9def-26ab-4eb8-81f5-d6c8ecd96ae8,ResourceVersion:22443842,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc000631dd7 0xc000631dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-30 13:51:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://13816bc0107b6cf0974d0aac60ffae4ca72a8d049905243f367c4ea983ad2edb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.196: INFO: Pod "nginx-deployment-7b8c6f4498-9qk5d" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9qk5d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-9qk5d,UID:34e8e3ab-6d3e-46c3-9421-9fb7e7b4420f,ResourceVersion:22443963,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc000631f37 0xc000631f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000631fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000631fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.196: INFO: Pod "nginx-deployment-7b8c6f4498-gpw66" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gpw66,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-gpw66,UID:2523d31b-044b-4801-838e-b00b45566fc7,ResourceVersion:22443975,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0057 0xc0021a0058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a00d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a00f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.196: INFO: Pod "nginx-deployment-7b8c6f4498-j5bn4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j5bn4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-j5bn4,UID:8725808c-e5d6-4674-abe5-2e18687e8c4d,ResourceVersion:22443947,Generation:0,CreationTimestamp:2020-01-30 13:51:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0187 0xc0021a0188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0200} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.196: INFO: Pod "nginx-deployment-7b8c6f4498-j5brb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j5brb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-j5brb,UID:385d7129-8b84-4fa8-8eae-2b60ffabc5ce,ResourceVersion:22443964,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a02a7 0xc0021a02a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0320} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.196: INFO: Pod "nginx-deployment-7b8c6f4498-kcwkw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kcwkw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-kcwkw,UID:9f7c8135-2cce-478a-a8d3-c4e17eb550b3,ResourceVersion:22443957,Generation:0,CreationTimestamp:2020-01-30 13:51:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a03c7 0xc0021a03c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0440} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.197: INFO: Pod "nginx-deployment-7b8c6f4498-lh6j9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lh6j9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-lh6j9,UID:c3218e14-ccc9-4e99-8788-73d77eea988a,ResourceVersion:22443845,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0507 0xc0021a0508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0570} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-30 13:51:04 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2d1a69a2dd6e0ea7c73022fa2d3079477f5a82328a06d9276fdfad71d07a31fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.197: INFO: Pod "nginx-deployment-7b8c6f4498-lq4rt" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lq4rt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-lq4rt,UID:5286aff4-7bfb-4764-9ecf-742c3eb6cb09,ResourceVersion:22443839,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0667 0xc0021a0668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a06f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-30 13:51:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5a17dd05a18f14ae68e6f3fade03097a140dcf08f2f03b7e0098785a8d1e746f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.197: INFO: Pod "nginx-deployment-7b8c6f4498-mg4fk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mg4fk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-mg4fk,UID:bb5c1ed6-aa23-4a88-8307-6faefc4e811d,ResourceVersion:22443943,Generation:0,CreationTimestamp:2020-01-30 13:51:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a07e7 0xc0021a07e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0860} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.197: INFO: Pod "nginx-deployment-7b8c6f4498-pqkkk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pqkkk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-pqkkk,UID:1425cd7d-1541-48aa-9d5b-04a39111ae04,ResourceVersion:22443848,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0907 0xc0021a0908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-30 13:51:04 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fa9915a0301334b675321dcaf5e0360b1e972e1736502cddb6f2aa149279fc00}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.198: INFO: Pod "nginx-deployment-7b8c6f4498-pzcgn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pzcgn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-pzcgn,UID:03e190cc-4452-436a-b110-1fd9fd0d28ca,ResourceVersion:22443984,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0a67 0xc0021a0a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.198: INFO: Pod "nginx-deployment-7b8c6f4498-q548r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q548r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-q548r,UID:08d65ccb-6b1a-4ed2-9e68-3da4be30d8b4,ResourceVersion:22443980,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0b87 
0xc0021a0b88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0c00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.198: INFO: Pod "nginx-deployment-7b8c6f4498-rr2m7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rr2m7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-rr2m7,UID:ff52638c-1619-44d4-b636-59475b9b0a4f,ResourceVersion:22443865,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0ca7 0xc0021a0ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-30 13:51:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://84c9797a38fbd9cbbe4ca0d62ca2cfa11d08f2e5806714aac2ab00b3fcf92eac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.199: INFO: Pod "nginx-deployment-7b8c6f4498-srl95" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-srl95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-srl95,UID:2a4d5923-6f26-48ae-a59a-8d35f9269d1a,ResourceVersion:22443859,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0e17 0xc0021a0e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a0e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a0eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-30 13:51:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://af477686dda7a79b10b1429b65679da9c86c97b7cf53141fc8610d00bee4533d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.199: INFO: Pod "nginx-deployment-7b8c6f4498-t5ppc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t5ppc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-t5ppc,UID:e4b93b00-a1e9-4f2f-b581-80848969a280,ResourceVersion:22443983,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a0f97 0xc0021a0f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a1010} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a1030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.199: INFO: Pod "nginx-deployment-7b8c6f4498-tsw48" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tsw48,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-tsw48,UID:7ce7507e-f5be-4e02-a382-7bd38173b539,ResourceVersion:22443856,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a10b7 0xc0021a10b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a1120} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a1140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-30 13:51:04 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://31e6222e6707932d783c185660b0ba24d52bd116158c6e930ad6f5dcb920b361}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.199: INFO: Pod "nginx-deployment-7b8c6f4498-v2rwl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v2rwl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-v2rwl,UID:209887f7-e576-4d7d-abb2-e73c053c42c7,ResourceVersion:22443853,Generation:0,CreationTimestamp:2020-01-30 13:51:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a1217 0xc0021a1218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a1280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a12a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-30 13:51:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 13:51:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://061a3e01d86d542987a86704289f9c74b404daaf39eecc2b505cce4666f60952}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.200: INFO: Pod "nginx-deployment-7b8c6f4498-v9nhs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v9nhs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-v9nhs,UID:a6fb0073-8277-4b45-b0bd-ed54f7e8cf65,ResourceVersion:22443981,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a1377 0xc0021a1378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a13e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a1400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 30 13:51:40.200: INFO: Pod "nginx-deployment-7b8c6f4498-w865s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w865s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4964,SelfLink:/api/v1/namespaces/deployment-4964/pods/nginx-deployment-7b8c6f4498-w865s,UID:23b36211-2bf5-4f51-8bd0-53adde1cecf0,ResourceVersion:22443982,Generation:0,CreationTimestamp:2020-01-30 13:51:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 201b2a11-ba30-453c-9d75-bede4af3b2ed 0xc0021a1487 
0xc0021a1488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w4vll {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w4vll,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w4vll true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021a14f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021a1510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:51:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:51:40.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4964" for this suite. Jan 30 13:53:02.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:53:03.963: INFO: namespace deployment-4964 deletion completed in 1m21.55475748s • [SLOW TEST:120.077 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:53:03.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 30 13:53:04.249: INFO: Waiting up to 5m0s for pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c" in namespace "emptydir-8401" to be "success or failure" Jan 30 13:53:04.342: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 92.695108ms Jan 30 13:53:06.364: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114367029s Jan 30 13:53:08.380: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130129995s Jan 30 13:53:10.390: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140705002s Jan 30 13:53:12.402: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152953711s Jan 30 13:53:14.410: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1604945s Jan 30 13:53:16.420: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.170497209s STEP: Saw pod success Jan 30 13:53:16.420: INFO: Pod "pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c" satisfied condition "success or failure" Jan 30 13:53:16.424: INFO: Trying to get logs from node iruya-node pod pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c container test-container: STEP: delete the pod Jan 30 13:53:16.680: INFO: Waiting for pod pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c to disappear Jan 30 13:53:16.689: INFO: Pod pod-3af5464d-283f-49d2-a4f4-a2e6a5e8b91c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:53:16.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8401" for this suite. 
Jan 30 13:53:22.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:53:22.909: INFO: namespace emptydir-8401 deletion completed in 6.214730355s • [SLOW TEST:18.945 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:53:22.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0130 13:53:33.549828 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan 30 13:53:33.549: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:53:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8405" for this suite. 
Jan 30 13:53:39.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:53:40.037: INFO: namespace gc-8405 deletion completed in 6.482565373s • [SLOW TEST:17.127 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:53:40.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 30 13:53:40.262: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4073,SelfLink:/api/v1/namespaces/watch-4073/configmaps/e2e-watch-test-resource-version,UID:ffe9d62d-1236-4980-bcb8-2e2d0bbfbb54,ResourceVersion:22444403,Generation:0,CreationTimestamp:2020-01-30 13:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 30 13:53:40.263: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4073,SelfLink:/api/v1/namespaces/watch-4073/configmaps/e2e-watch-test-resource-version,UID:ffe9d62d-1236-4980-bcb8-2e2d0bbfbb54,ResourceVersion:22444406,Generation:0,CreationTimestamp:2020-01-30 13:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:53:40.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4073" for this suite. 
Jan 30 13:53:46.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:53:46.459: INFO: namespace watch-4073 deletion completed in 6.183627224s • [SLOW TEST:6.421 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:53:46.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 30 13:53:54.695: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 
30 13:53:54.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8168" for this suite. Jan 30 13:54:00.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:54:00.931: INFO: namespace container-runtime-8168 deletion completed in 6.146206855s • [SLOW TEST:14.472 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:54:00.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 30 13:54:01.029: INFO: PodSpec: initContainers in spec.initContainers Jan 30 13:55:04.742: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a955b44a-16ff-43e1-8803-76576b1f2dc5", GenerateName:"", Namespace:"init-container-6599", SelfLink:"/api/v1/namespaces/init-container-6599/pods/pod-init-a955b44a-16ff-43e1-8803-76576b1f2dc5", UID:"5675dc5a-5140-432d-8851-3b9bf6258bad", ResourceVersion:"22444572", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715989241, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"29054158"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ndcsd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00030c000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ndcsd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ndcsd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), 
ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ndcsd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a36188), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024423c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a36310)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a36330)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000a36338), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a3633c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715989241, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715989241, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715989241, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715989241, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002c8e500), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026a8070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026a80e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ad86fedd0874d526998f8b2f62dc2f6fee54550a144a2805bb06e0c4fd402a84"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c8e540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c8e520), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", 
ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:55:04.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6599" for this suite. Jan 30 13:55:26.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:55:26.927: INFO: namespace init-container-6599 deletion completed in 22.164045972s • [SLOW TEST:85.996 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:55:26.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 30 13:55:35.068: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f2de4d35-ae55-4347-8edb-15f78d201f33,GenerateName:,Namespace:events-4623,SelfLink:/api/v1/namespaces/events-4623/pods/send-events-f2de4d35-ae55-4347-8edb-15f78d201f33,UID:71d457b5-2282-431b-848f-b64b2c5f3dea,ResourceVersion:22444642,Generation:0,CreationTimestamp:2020-01-30 13:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 989905146,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kqp9v {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kqp9v,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kqp9v true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c03e30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002c03e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:55:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:55:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:55:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 13:55:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-30 13:55:27 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-30 13:55:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://a9065ce3f0b3546ac06480428a4a1c9f5f130fc9d05d907dd32e0beef10975db}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jan 30 13:55:37.082: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 30 13:55:39.092: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:55:39.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4623" for this suite. 
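The Events test above passes once it has observed both a scheduler event and a kubelet event for the pod ("Saw scheduler event" / "Saw kubelet event"). A minimal sketch of that check, using hypothetical (source-component, message) pairs as stand-ins for the v1.Event objects the real framework lists from the API server:

```python
# Sketch: verify that both the scheduler and the kubelet reported
# events for a pod. The event tuples below are illustrative stand-ins,
# not real API objects from this run.

def saw_event_from(events, component):
    """Return True if any event was reported by the given source component."""
    return any(src == component for src, _msg in events)

events = [
    ("default-scheduler", "Successfully assigned pod to a node"),
    ("kubelet", "Started container"),
]

assert saw_event_from(events, "default-scheduler")
assert saw_event_from(events, "kubelet")
assert not saw_event_from(events, "controller-manager")
```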
Jan 30 13:56:19.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:56:19.340: INFO: namespace events-4623 deletion completed in 40.181397563s • [SLOW TEST:52.412 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:56:19.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-55225084-7f64-47ca-a696-c30a484e00cf in namespace container-probe-7150 Jan 30 13:56:29.453: INFO: Started pod liveness-55225084-7f64-47ca-a696-c30a484e00cf in namespace container-probe-7150 STEP: checking the pod's current state and verifying that restartCount is present Jan 30 13:56:29.459: INFO: Initial restart count of pod liveness-55225084-7f64-47ca-a696-c30a484e00cf is 0 Jan 30 13:56:45.536: INFO: Restart 
count of pod container-probe-7150/liveness-55225084-7f64-47ca-a696-c30a484e00cf is now 1 (16.077106293s elapsed)
Jan 30 13:57:07.680: INFO: Restart count of pod container-probe-7150/liveness-55225084-7f64-47ca-a696-c30a484e00cf is now 2 (38.220919459s elapsed)
Jan 30 13:57:27.794: INFO: Restart count of pod container-probe-7150/liveness-55225084-7f64-47ca-a696-c30a484e00cf is now 3 (58.335310681s elapsed)
Jan 30 13:57:45.920: INFO: Restart count of pod container-probe-7150/liveness-55225084-7f64-47ca-a696-c30a484e00cf is now 4 (1m16.460683289s elapsed)
Jan 30 13:58:46.862: INFO: Restart count of pod container-probe-7150/liveness-55225084-7f64-47ca-a696-c30a484e00cf is now 5 (2m17.403405178s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 13:58:46.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7150" for this suite.
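The probing test above asserts exactly one invariant: a container's restartCount, sampled over time, never decreases. A minimal sketch of that check, with the sample list taken from the restart counts logged for the liveness pod:

```python
# Sketch: the "monotonically increasing restart count" invariant.
# A kubelet must only ever increment restartCount; a decrease would
# indicate lost or reset container state.

def is_monotonic_non_decreasing(counts):
    """Return True if each sample is >= the one before it."""
    return all(a <= b for a, b in zip(counts, counts[1:]))

observed = [0, 1, 2, 3, 4, 5]  # restart counts logged above
assert is_monotonic_non_decreasing(observed)
assert not is_monotonic_non_decreasing([0, 1, 2, 1])  # a reset would fail the test
```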
Jan 30 13:58:53.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:58:53.702: INFO: namespace container-probe-7150 deletion completed in 6.727763836s • [SLOW TEST:154.361 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:58:53.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 30 13:59:09.968: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:09.984: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:11.984: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:11.998: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:13.984: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:13.995: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:15.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:15.994: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:17.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:17.992: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:19.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:19.995: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:21.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:21.994: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:23.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:24.003: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:25.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:26.013: INFO: Pod pod-with-poststart-http-hook still exists
Jan 30 13:59:27.985: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 30 13:59:27.998: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 13:59:27.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3697" for this suite. Jan 30 13:59:50.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 13:59:50.141: INFO: namespace container-lifecycle-hook-3697 deletion completed in 22.132120448s • [SLOW TEST:56.438 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 13:59:50.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': 
should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:00:45.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1015" for this suite.
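The container-runtime test above cycles through three containers whose names encode their restart policy (rpa = Always, rpof = OnFailure, rpn = Never) and checks that RestartCount, Phase, and State match the policy's semantics. A sketch of the expectation table, as an illustration of Kubernetes restart-policy semantics rather than the e2e framework's own data:

```python
# Sketch: whether the kubelet restarts an exited container depends on
# the pod's restartPolicy and the container's exit code.

def should_restart(policy, exit_code):
    """Kubernetes restartPolicy semantics for an exited container."""
    if policy == "Always":
        return True          # restarted regardless of exit code
    if policy == "OnFailure":
        return exit_code != 0  # restarted only on failure
    return False             # "Never": left terminated

assert should_restart("Always", 0)
assert should_restart("Always", 1)
assert should_restart("OnFailure", 1)
assert not should_restart("OnFailure", 0)
assert not should_restart("Never", 1)
```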
Jan 30 14:00:51.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:00:51.851: INFO: namespace container-runtime-1015 deletion completed in 6.211803997s • [SLOW TEST:61.710 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:00:51.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 30 14:00:51.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926" in namespace "downward-api-1033" to be "success or failure" Jan 30 14:00:52.080: 
INFO: Pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926": Phase="Pending", Reason="", readiness=false. Elapsed: 133.957079ms Jan 30 14:00:54.092: INFO: Pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14625964s Jan 30 14:00:56.099: INFO: Pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153293821s Jan 30 14:00:58.107: INFO: Pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16144294s Jan 30 14:01:00.114: INFO: Pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.168387303s STEP: Saw pod success Jan 30 14:01:00.114: INFO: Pod "downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926" satisfied condition "success or failure" Jan 30 14:01:00.117: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926 container client-container: STEP: delete the pod Jan 30 14:01:00.190: INFO: Waiting for pod downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926 to disappear Jan 30 14:01:00.196: INFO: Pod downwardapi-volume-7094ccf2-2976-4c84-bc20-a15891a85926 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:01:00.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1033" for this suite. 
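The repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" lines throughout this run all follow one pattern: poll the pod's phase until it reaches a terminal value or the timeout expires. A minimal sketch of that wait loop, where the phase list simulates successive GETs against the API server (the real framework polls roughly every 2s with a 5m budget):

```python
import itertools

# Sketch: poll a pod phase until it is terminal ("success or failure").
# `phases` is an iterable of simulated API responses, not a real client.

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(phases, max_polls=150):
    """Consume phase samples until one is terminal; raise on timeout."""
    for phase in itertools.islice(phases, max_polls):
        if phase in TERMINAL_PHASES:
            return phase
    raise TimeoutError("pod never reached a terminal phase")

polls = iter(["Pending", "Pending", "Pending", "Pending", "Succeeded"])
assert wait_for_terminal_phase(polls) == "Succeeded"
```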
Jan 30 14:01:06.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:01:06.384: INFO: namespace downward-api-1033 deletion completed in 6.183128582s • [SLOW TEST:14.532 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:01:06.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-8851/secret-test-cacf11b5-6797-416c-b1ff-32df32b230b2 STEP: Creating a pod to test consume secrets Jan 30 14:01:06.566: INFO: Waiting up to 5m0s for pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708" in namespace "secrets-8851" to be "success or failure" Jan 30 14:01:06.575: INFO: Pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708": Phase="Pending", Reason="", readiness=false. Elapsed: 9.122975ms Jan 30 14:01:08.597: INFO: Pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030936613s Jan 30 14:01:10.610: INFO: Pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043651925s Jan 30 14:01:12.649: INFO: Pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08267987s Jan 30 14:01:14.657: INFO: Pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091372671s STEP: Saw pod success Jan 30 14:01:14.658: INFO: Pod "pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708" satisfied condition "success or failure" Jan 30 14:01:14.661: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708 container env-test: STEP: delete the pod Jan 30 14:01:14.752: INFO: Waiting for pod pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708 to disappear Jan 30 14:01:14.759: INFO: Pod pod-configmaps-6aad3929-61f5-472d-aaa3-0d6e42143708 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:01:14.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8851" for this suite. 
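The Secrets test above consumes a secret through the container environment. Secret data is stored base64-encoded in the API object and decoded before being exposed to the container. A sketch of that decoding step, with a hypothetical key and value (the log does not show this test's actual secret contents, and the env-var name mapping here is illustrative; in a real pod spec each variable is named explicitly via secretKeyRef):

```python
import base64

# Sketch: turn a Secret's base64-encoded data map into environment
# variable values. Key/value are hypothetical.

secret_data = {"data-1": base64.b64encode(b"value-1").decode()}

env = {
    key.upper().replace("-", "_"): base64.b64decode(val).decode()
    for key, val in secret_data.items()
}

assert env == {"DATA_1": "value-1"}
```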
Jan 30 14:01:20.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:01:20.958: INFO: namespace secrets-8851 deletion completed in 6.188812148s • [SLOW TEST:14.573 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:01:20.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:01:29.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9041" for this suite. 
Jan 30 14:02:21.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:02:21.326: INFO: namespace kubelet-test-9041 deletion completed in 52.191158178s • [SLOW TEST:60.367 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:02:21.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8646.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8646.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 30 14:02:35.793: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.809: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.818: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods 
dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.826: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.835: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.843: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.850: INFO: Unable to read jessie_udp@PodARecord from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.856: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b: the server could not find the requested resource (get pods dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b) Jan 30 14:02:35.856: INFO: Lookups using dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Jan 30 14:02:40.961: INFO: DNS probes using dns-8646/dns-test-3e5f8ed7-6f6f-4204-8c54-2019136b9d2b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:02:41.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "dns-8646" for this suite. Jan 30 14:02:47.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:02:47.291: INFO: namespace dns-8646 deletion completed in 6.157946012s • [SLOW TEST:25.964 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:02:47.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-aa7049e3-1b05-4ad9-869e-54beca81ca78 STEP: Creating a pod to test consume configMaps Jan 30 14:02:47.513: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7" in namespace "projected-5227" to be "success or failure" Jan 30 14:02:47.521: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.654056ms Jan 30 14:02:49.529: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015461553s Jan 30 14:02:51.552: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039360341s Jan 30 14:02:53.570: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056756337s Jan 30 14:02:55.589: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076425556s Jan 30 14:02:57.599: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085825253s STEP: Saw pod success Jan 30 14:02:57.599: INFO: Pod "pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7" satisfied condition "success or failure" Jan 30 14:02:57.605: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7 container projected-configmap-volume-test: STEP: delete the pod Jan 30 14:02:57.669: INFO: Waiting for pod pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7 to disappear Jan 30 14:02:57.680: INFO: Pod pod-projected-configmaps-f07f18fa-72c4-4ee9-b3a7-58f1f9fc46c7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:02:57.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5227" for this suite. 
Jan 30 14:03:03.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:03:03.868: INFO: namespace projected-5227 deletion completed in 6.1774115s • [SLOW TEST:16.577 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:03:03.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 30 14:03:04.029: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445559,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 30 14:03:04.030: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445559,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 30 14:03:14.052: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445573,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 30 14:03:14.053: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445573,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 30 14:03:24.067: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445588,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 30 14:03:24.067: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445588,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 30 14:03:34.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445602,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 30 14:03:34.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-a,UID:7c5c5f40-84ff-431e-acb2-d730b43f6030,ResourceVersion:22445602,Generation:0,CreationTimestamp:2020-01-30 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 30 14:03:44.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-b,UID:322e7cd5-2e67-4a84-8f6d-eb9e4d723ded,ResourceVersion:22445617,Generation:0,CreationTimestamp:2020-01-30 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 30 14:03:44.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-b,UID:322e7cd5-2e67-4a84-8f6d-eb9e4d723ded,ResourceVersion:22445617,Generation:0,CreationTimestamp:2020-01-30 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 30 14:03:54.128: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-b,UID:322e7cd5-2e67-4a84-8f6d-eb9e4d723ded,ResourceVersion:22445631,Generation:0,CreationTimestamp:2020-01-30 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 30 14:03:54.129: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5597,SelfLink:/api/v1/namespaces/watch-5597/configmaps/e2e-watch-test-configmap-b,UID:322e7cd5-2e67-4a84-8f6d-eb9e4d723ded,ResourceVersion:22445631,Generation:0,CreationTimestamp:2020-01-30 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:04:04.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5597" for this suite. 
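Each watch notification above appears twice because three watchers were registered (label A, label B, and A-or-B), and an event on configmap A matches both the A watcher and the A-or-B watcher. A minimal sketch of that label-selector dispatch, assuming set-based selectors of the form `key in (v1, v2)` (the selector shapes here are illustrative, not the test's exact client-go calls):

```python
def matches(selector, labels):
    """True if the object's labels satisfy a set-based selector (key in values)."""
    key, values = selector
    return labels.get(key) in values

# Selectors mirroring the test's three watchers.
watch_a      = ("watch-this-configmap", {"multiple-watchers-A"})
watch_b      = ("watch-this-configmap", {"multiple-watchers-B"})
watch_a_or_b = ("watch-this-configmap",
                {"multiple-watchers-A", "multiple-watchers-B"})

# Labels on e2e-watch-test-configmap-a, as seen in the ObjectMeta dump above.
cm_a = {"watch-this-configmap": "multiple-watchers-A"}

# An ADDED/MODIFIED/DELETED event for configmap A is delivered to watchers
# A and A-or-B, which is why every notification is logged twice.
delivered = [name
             for name, sel in [("A", watch_a), ("B", watch_b), ("A|B", watch_a_or_b)]
             if matches(sel, cm_a)]
```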
Jan 30 14:04:10.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:04:10.300: INFO: namespace watch-5597 deletion completed in 6.161665037s • [SLOW TEST:66.432 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:04:10.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:04:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1227" for this suite. Jan 30 14:04:22.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:04:23.022: INFO: namespace namespaces-1227 deletion completed in 6.277040804s STEP: Destroying namespace "nsdeletetest-7500" for this suite. Jan 30 14:04:23.027: INFO: Namespace nsdeletetest-7500 was already deleted STEP: Destroying namespace "nsdeletetest-4651" for this suite. Jan 30 14:04:29.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:04:29.251: INFO: namespace nsdeletetest-4651 deletion completed in 6.224497635s • [SLOW TEST:18.951 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:04:29.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty 
key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-312bfc6a-a428-44d4-a6ed-da9e185c97c0 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:04:29.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-44" for this suite. Jan 30 14:04:35.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:04:35.542: INFO: namespace configmap-44 deletion completed in 6.142078042s • [SLOW TEST:6.290 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:04:35.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-f313a5f4-f496-406d-9c0f-70bae9ed7779 STEP: Creating a pod to test consume configMaps Jan 30 
14:04:35.746: INFO: Waiting up to 5m0s for pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d" in namespace "configmap-6235" to be "success or failure" Jan 30 14:04:35.761: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.868815ms Jan 30 14:04:37.772: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025906825s Jan 30 14:04:39.784: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038009023s Jan 30 14:04:41.797: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051015612s Jan 30 14:04:43.807: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060869645s Jan 30 14:04:45.820: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073973052s STEP: Saw pod success Jan 30 14:04:45.820: INFO: Pod "pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d" satisfied condition "success or failure" Jan 30 14:04:45.828: INFO: Trying to get logs from node iruya-node pod pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d container configmap-volume-test: STEP: delete the pod Jan 30 14:04:45.940: INFO: Waiting for pod pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d to disappear Jan 30 14:04:46.008: INFO: Pod pod-configmaps-00c2c296-1e93-4bb2-9271-e1c31e71d90d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:04:46.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6235" for this suite. 
Jan 30 14:04:52.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:04:52.170: INFO: namespace configmap-6235 deletion completed in 6.147971235s • [SLOW TEST:16.628 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:04:52.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1873/configmap-test-a2df0a78-0256-4051-8701-65476e8f1c56 STEP: Creating a pod to test consume configMaps Jan 30 14:04:52.275: INFO: Waiting up to 5m0s for pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4" in namespace "configmap-1873" to be "success or failure" Jan 30 14:04:52.334: INFO: Pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 58.89612ms Jan 30 14:04:54.375: INFO: Pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.099410548s Jan 30 14:04:56.419: INFO: Pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143596651s Jan 30 14:04:58.426: INFO: Pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150780811s Jan 30 14:05:00.447: INFO: Pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171221207s STEP: Saw pod success Jan 30 14:05:00.447: INFO: Pod "pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4" satisfied condition "success or failure" Jan 30 14:05:00.451: INFO: Trying to get logs from node iruya-node pod pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4 container env-test: STEP: delete the pod Jan 30 14:05:00.609: INFO: Waiting for pod pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4 to disappear Jan 30 14:05:00.617: INFO: Pod pod-configmaps-927d5fd7-2ef0-4503-b399-434abf96c4c4 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:05:00.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1873" for this suite. 
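The "consumable via the environment" test above injects ConfigMap keys into the `env-test` container as environment variables. A minimal sketch of what the container-side consumption looks like; the variable name and value below are hypothetical, not the test's actual data:

```python
import os

# Simulate the kubelet injecting a ConfigMap key as an environment
# variable (name/value are hypothetical placeholders).
os.environ["CONFIG_DATA_1"] = "value-1"

def read_config(name, default=None):
    """Read a ConfigMap-backed setting from the environment."""
    return os.environ.get(name, default)

value = read_config("CONFIG_DATA_1")
```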
Jan 30 14:05:06.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:05:06.803: INFO: namespace configmap-1873 deletion completed in 6.1796219s • [SLOW TEST:14.632 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:05:06.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-kqwl STEP: Creating a pod to test atomic-volume-subpath Jan 30 14:05:06.966: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kqwl" in namespace "subpath-8911" to be "success or failure" Jan 30 14:05:06.977: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Pending", Reason="", readiness=false. Elapsed: 11.088112ms Jan 30 14:05:08.988: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02198497s Jan 30 14:05:10.999: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032621204s Jan 30 14:05:13.004: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038250826s Jan 30 14:05:15.014: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047977052s Jan 30 14:05:17.020: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 10.054153891s Jan 30 14:05:19.030: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 12.063877575s Jan 30 14:05:21.040: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 14.074379988s Jan 30 14:05:23.053: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 16.086641963s Jan 30 14:05:25.065: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 18.099043505s Jan 30 14:05:27.078: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 20.111438926s Jan 30 14:05:29.212: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 22.24552428s Jan 30 14:05:31.222: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 24.256297408s Jan 30 14:05:33.235: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 26.268761066s Jan 30 14:05:35.245: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Running", Reason="", readiness=true. Elapsed: 28.279048399s Jan 30 14:05:37.256: INFO: Pod "pod-subpath-test-configmap-kqwl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.290275131s STEP: Saw pod success Jan 30 14:05:37.257: INFO: Pod "pod-subpath-test-configmap-kqwl" satisfied condition "success or failure" Jan 30 14:05:37.263: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-kqwl container test-container-subpath-configmap-kqwl: STEP: delete the pod Jan 30 14:05:37.526: INFO: Waiting for pod pod-subpath-test-configmap-kqwl to disappear Jan 30 14:05:37.537: INFO: Pod pod-subpath-test-configmap-kqwl no longer exists STEP: Deleting pod pod-subpath-test-configmap-kqwl Jan 30 14:05:37.538: INFO: Deleting pod "pod-subpath-test-configmap-kqwl" in namespace "subpath-8911" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:05:37.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8911" for this suite. Jan 30 14:05:43.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:05:43.822: INFO: namespace subpath-8911 deletion completed in 6.271015417s • [SLOW TEST:37.018 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:05:43.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4570 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jan 30 14:05:43.987: INFO: Found 0 stateful pods, waiting for 3 Jan 30 14:05:54.002: INFO: Found 2 stateful pods, waiting for 3 Jan 30 14:06:04.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 14:06:04.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 14:06:04.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 14:06:14.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 14:06:14.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 14:06:14.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 30 14:06:14.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4570 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 30 14:06:16.888: INFO: stderr: "I0130 14:06:16.612111 2823 log.go:172] (0xc000138000) 
(0xc000624320) Create stream\nI0130 14:06:16.612643 2823 log.go:172] (0xc000138000) (0xc000624320) Stream added, broadcasting: 1\nI0130 14:06:16.623429 2823 log.go:172] (0xc000138000) Reply frame received for 1\nI0130 14:06:16.623642 2823 log.go:172] (0xc000138000) (0xc0006e60a0) Create stream\nI0130 14:06:16.623662 2823 log.go:172] (0xc000138000) (0xc0006e60a0) Stream added, broadcasting: 3\nI0130 14:06:16.625240 2823 log.go:172] (0xc000138000) Reply frame received for 3\nI0130 14:06:16.625276 2823 log.go:172] (0xc000138000) (0xc0002c2000) Create stream\nI0130 14:06:16.625306 2823 log.go:172] (0xc000138000) (0xc0002c2000) Stream added, broadcasting: 5\nI0130 14:06:16.626476 2823 log.go:172] (0xc000138000) Reply frame received for 5\nI0130 14:06:16.716042 2823 log.go:172] (0xc000138000) Data frame received for 5\nI0130 14:06:16.716095 2823 log.go:172] (0xc0002c2000) (5) Data frame handling\nI0130 14:06:16.716121 2823 log.go:172] (0xc0002c2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 14:06:16.739402 2823 log.go:172] (0xc000138000) Data frame received for 3\nI0130 14:06:16.739460 2823 log.go:172] (0xc0006e60a0) (3) Data frame handling\nI0130 14:06:16.739478 2823 log.go:172] (0xc0006e60a0) (3) Data frame sent\nI0130 14:06:16.873051 2823 log.go:172] (0xc000138000) Data frame received for 1\nI0130 14:06:16.873715 2823 log.go:172] (0xc000138000) (0xc0006e60a0) Stream removed, broadcasting: 3\nI0130 14:06:16.873807 2823 log.go:172] (0xc000624320) (1) Data frame handling\nI0130 14:06:16.873863 2823 log.go:172] (0xc000138000) (0xc0002c2000) Stream removed, broadcasting: 5\nI0130 14:06:16.873933 2823 log.go:172] (0xc000624320) (1) Data frame sent\nI0130 14:06:16.873967 2823 log.go:172] (0xc000138000) (0xc000624320) Stream removed, broadcasting: 1\nI0130 14:06:16.874006 2823 log.go:172] (0xc000138000) Go away received\nI0130 14:06:16.875782 2823 log.go:172] (0xc000138000) (0xc000624320) Stream removed, broadcasting: 1\nI0130 14:06:16.875796 
2823 log.go:172] (0xc000138000) (0xc0006e60a0) Stream removed, broadcasting: 3\nI0130 14:06:16.875801 2823 log.go:172] (0xc000138000) (0xc0002c2000) Stream removed, broadcasting: 5\n" Jan 30 14:06:16.888: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 30 14:06:16.888: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 30 14:06:16.982: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 30 14:06:27.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4570 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 14:06:27.473: INFO: stderr: "I0130 14:06:27.293065 2855 log.go:172] (0xc0009cc420) (0xc0003306e0) Create stream\nI0130 14:06:27.293283 2855 log.go:172] (0xc0009cc420) (0xc0003306e0) Stream added, broadcasting: 1\nI0130 14:06:27.296595 2855 log.go:172] (0xc0009cc420) Reply frame received for 1\nI0130 14:06:27.296657 2855 log.go:172] (0xc0009cc420) (0xc000908000) Create stream\nI0130 14:06:27.296671 2855 log.go:172] (0xc0009cc420) (0xc000908000) Stream added, broadcasting: 3\nI0130 14:06:27.297645 2855 log.go:172] (0xc0009cc420) Reply frame received for 3\nI0130 14:06:27.297691 2855 log.go:172] (0xc0009cc420) (0xc000a2a000) Create stream\nI0130 14:06:27.297708 2855 log.go:172] (0xc0009cc420) (0xc000a2a000) Stream added, broadcasting: 5\nI0130 14:06:27.298619 2855 log.go:172] (0xc0009cc420) Reply frame received for 5\nI0130 14:06:27.381823 2855 log.go:172] (0xc0009cc420) Data frame received for 3\nI0130 14:06:27.381988 2855 log.go:172] (0xc000908000) (3) Data frame handling\nI0130 14:06:27.382025 2855 log.go:172] (0xc000908000) (3) Data frame sent\nI0130 14:06:27.382069 2855 
log.go:172] (0xc0009cc420) Data frame received for 5\nI0130 14:06:27.382085 2855 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0130 14:06:27.382104 2855 log.go:172] (0xc000a2a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 14:06:27.458945 2855 log.go:172] (0xc0009cc420) Data frame received for 1\nI0130 14:06:27.459238 2855 log.go:172] (0xc0009cc420) (0xc000908000) Stream removed, broadcasting: 3\nI0130 14:06:27.459908 2855 log.go:172] (0xc0003306e0) (1) Data frame handling\nI0130 14:06:27.459982 2855 log.go:172] (0xc0003306e0) (1) Data frame sent\nI0130 14:06:27.460043 2855 log.go:172] (0xc0009cc420) (0xc0003306e0) Stream removed, broadcasting: 1\nI0130 14:06:27.461939 2855 log.go:172] (0xc0009cc420) (0xc000a2a000) Stream removed, broadcasting: 5\nI0130 14:06:27.462259 2855 log.go:172] (0xc0009cc420) Go away received\nI0130 14:06:27.462706 2855 log.go:172] (0xc0009cc420) (0xc0003306e0) Stream removed, broadcasting: 1\nI0130 14:06:27.462758 2855 log.go:172] (0xc0009cc420) (0xc000908000) Stream removed, broadcasting: 3\nI0130 14:06:27.462797 2855 log.go:172] (0xc0009cc420) (0xc000a2a000) Stream removed, broadcasting: 5\n" Jan 30 14:06:27.474: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 30 14:06:27.474: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 30 14:06:37.517: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update Jan 30 14:06:37.517: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 30 14:06:37.517: INFO: Waiting for Pod statefulset-4570/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 30 14:06:47.548: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update Jan 30 14:06:47.549: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c 
Jan 30 14:06:47.549: INFO: Waiting for Pod statefulset-4570/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 30 14:06:57.537: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update Jan 30 14:06:57.537: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 30 14:06:57.537: INFO: Waiting for Pod statefulset-4570/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 30 14:07:07.536: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update Jan 30 14:07:07.536: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 30 14:07:17.528: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update STEP: Rolling back to a previous revision Jan 30 14:07:27.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4570 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 30 14:07:28.068: INFO: stderr: "I0130 14:07:27.772248 2878 log.go:172] (0xc0009a2370) (0xc000584640) Create stream\nI0130 14:07:27.772595 2878 log.go:172] (0xc0009a2370) (0xc000584640) Stream added, broadcasting: 1\nI0130 14:07:27.777834 2878 log.go:172] (0xc0009a2370) Reply frame received for 1\nI0130 14:07:27.777977 2878 log.go:172] (0xc0009a2370) (0xc000926000) Create stream\nI0130 14:07:27.778012 2878 log.go:172] (0xc0009a2370) (0xc000926000) Stream added, broadcasting: 3\nI0130 14:07:27.783243 2878 log.go:172] (0xc0009a2370) Reply frame received for 3\nI0130 14:07:27.783291 2878 log.go:172] (0xc0009a2370) (0xc0005846e0) Create stream\nI0130 14:07:27.783300 2878 log.go:172] (0xc0009a2370) (0xc0005846e0) Stream added, broadcasting: 5\nI0130 14:07:27.787057 2878 log.go:172] (0xc0009a2370) Reply frame received for 5\nI0130 14:07:27.936132 2878 log.go:172] (0xc0009a2370) Data frame received for 5\nI0130 14:07:27.936545 2878 log.go:172] 
(0xc0005846e0) (5) Data frame handling\nI0130 14:07:27.936641 2878 log.go:172] (0xc0005846e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 14:07:27.967254 2878 log.go:172] (0xc0009a2370) Data frame received for 3\nI0130 14:07:27.967342 2878 log.go:172] (0xc000926000) (3) Data frame handling\nI0130 14:07:27.967377 2878 log.go:172] (0xc000926000) (3) Data frame sent\nI0130 14:07:28.059381 2878 log.go:172] (0xc0009a2370) Data frame received for 1\nI0130 14:07:28.059619 2878 log.go:172] (0xc0009a2370) (0xc0005846e0) Stream removed, broadcasting: 5\nI0130 14:07:28.059695 2878 log.go:172] (0xc000584640) (1) Data frame handling\nI0130 14:07:28.059729 2878 log.go:172] (0xc0009a2370) (0xc000926000) Stream removed, broadcasting: 3\nI0130 14:07:28.059789 2878 log.go:172] (0xc000584640) (1) Data frame sent\nI0130 14:07:28.059806 2878 log.go:172] (0xc0009a2370) (0xc000584640) Stream removed, broadcasting: 1\nI0130 14:07:28.059824 2878 log.go:172] (0xc0009a2370) Go away received\nI0130 14:07:28.060831 2878 log.go:172] (0xc0009a2370) (0xc000584640) Stream removed, broadcasting: 1\nI0130 14:07:28.060843 2878 log.go:172] (0xc0009a2370) (0xc000926000) Stream removed, broadcasting: 3\nI0130 14:07:28.060850 2878 log.go:172] (0xc0009a2370) (0xc0005846e0) Stream removed, broadcasting: 5\n" Jan 30 14:07:28.068: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 30 14:07:28.068: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 30 14:07:38.119: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 30 14:07:48.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4570 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 30 14:07:48.652: INFO: stderr: "I0130 14:07:48.442178 2896 log.go:172] (0xc000a32420) (0xc0003bc820) Create 
stream\nI0130 14:07:48.442667 2896 log.go:172] (0xc000a32420) (0xc0003bc820) Stream added, broadcasting: 1\nI0130 14:07:48.449982 2896 log.go:172] (0xc000a32420) Reply frame received for 1\nI0130 14:07:48.450633 2896 log.go:172] (0xc000a32420) (0xc0009e6000) Create stream\nI0130 14:07:48.450702 2896 log.go:172] (0xc000a32420) (0xc0009e6000) Stream added, broadcasting: 3\nI0130 14:07:48.456960 2896 log.go:172] (0xc000a32420) Reply frame received for 3\nI0130 14:07:48.457157 2896 log.go:172] (0xc000a32420) (0xc0009e60a0) Create stream\nI0130 14:07:48.457211 2896 log.go:172] (0xc000a32420) (0xc0009e60a0) Stream added, broadcasting: 5\nI0130 14:07:48.460209 2896 log.go:172] (0xc000a32420) Reply frame received for 5\nI0130 14:07:48.553721 2896 log.go:172] (0xc000a32420) Data frame received for 3\nI0130 14:07:48.553934 2896 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0130 14:07:48.553980 2896 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0130 14:07:48.554033 2896 log.go:172] (0xc000a32420) Data frame received for 5\nI0130 14:07:48.554070 2896 log.go:172] (0xc0009e60a0) (5) Data frame handling\nI0130 14:07:48.554112 2896 log.go:172] (0xc0009e60a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 14:07:48.641484 2896 log.go:172] (0xc000a32420) (0xc0009e60a0) Stream removed, broadcasting: 5\nI0130 14:07:48.641631 2896 log.go:172] (0xc000a32420) Data frame received for 1\nI0130 14:07:48.641766 2896 log.go:172] (0xc000a32420) (0xc0009e6000) Stream removed, broadcasting: 3\nI0130 14:07:48.641823 2896 log.go:172] (0xc0003bc820) (1) Data frame handling\nI0130 14:07:48.641843 2896 log.go:172] (0xc0003bc820) (1) Data frame sent\nI0130 14:07:48.641857 2896 log.go:172] (0xc000a32420) (0xc0003bc820) Stream removed, broadcasting: 1\nI0130 14:07:48.641870 2896 log.go:172] (0xc000a32420) Go away received\nI0130 14:07:48.643205 2896 log.go:172] (0xc000a32420) (0xc0003bc820) Stream removed, broadcasting: 1\nI0130 14:07:48.643245 2896 log.go:172] 
(0xc000a32420) (0xc0009e6000) Stream removed, broadcasting: 3\nI0130 14:07:48.643261 2896 log.go:172] (0xc000a32420) (0xc0009e60a0) Stream removed, broadcasting: 5\n"
Jan 30 14:07:48.653: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 14:07:48.653: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jan 30 14:07:58.692: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update
Jan 30 14:07:58.692: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 14:07:58.692: INFO: Waiting for Pod statefulset-4570/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 14:08:08.733: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update
Jan 30 14:08:08.733: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 14:08:18.707: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update
Jan 30 14:08:18.707: INFO: Waiting for Pod statefulset-4570/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 30 14:08:28.709: INFO: Waiting for StatefulSet statefulset-4570/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 30 14:08:38.719: INFO: Deleting all statefulset in ns statefulset-4570
Jan 30 14:08:38.725: INFO: Scaling statefulset ss2 to 0
Jan 30 14:09:08.785: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 14:09:08.791: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:09:08.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4570" for this suite.
Jan 30 14:09:16.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:09:17.013: INFO: namespace statefulset-4570 deletion completed in 8.184984236s
• [SLOW TEST:213.191 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:09:17.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 30 14:09:17.159: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 14:09:17.171: INFO: Waiting for terminating namespaces to be deleted...
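Editor's note on the "Updating Pods in reverse ordinal order" steps in the rolling-update test above: the StatefulSet controller rolls pods from the highest ordinal down to 0, which is why ss2-1 is exec'd before ss2-0 in the log. A minimal sketch of that ordering (plain Python, not the controller's Go code; `rolling_update_order` is a hypothetical helper):

```python
def rolling_update_order(name, replicas, partition=0):
    """Pod names a StatefulSet rolling update touches, highest ordinal first.

    Pods with ordinal < partition are left on the old revision
    (the RollingUpdate strategy's `partition` field)."""
    return [f"{name}-{i}" for i in range(replicas - 1, partition - 1, -1)]

# For the 2-replica ss2 in this run, the update walks ss2-1, then ss2-0.
print(rolling_update_order("ss2", 2))  # ['ss2-1', 'ss2-0']
```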
Jan 30 14:09:17.175: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 30 14:09:17.196: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.197: INFO: Container kube-proxy ready: true, restart count 0
Jan 30 14:09:17.197: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 30 14:09:17.197: INFO: Container weave ready: true, restart count 0
Jan 30 14:09:17.197: INFO: Container weave-npc ready: true, restart count 0
Jan 30 14:09:17.197: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 30 14:09:17.216: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container kube-apiserver ready: true, restart count 0
Jan 30 14:09:17.216: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container coredns ready: true, restart count 0
Jan 30 14:09:17.216: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container kube-scheduler ready: true, restart count 13
Jan 30 14:09:17.216: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container weave ready: true, restart count 0
Jan 30 14:09:17.216: INFO: Container weave-npc ready: true, restart count 0
Jan 30 14:09:17.216: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container coredns ready: true, restart count 0
Jan 30 14:09:17.216: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container etcd ready: true, restart count 0
Jan 30 14:09:17.216: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container kube-proxy ready: true, restart count 0
Jan 30 14:09:17.216: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 30 14:09:17.216: INFO: Container kube-controller-manager ready: true, restart count 19
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0bc57460-970f-4387-8dde-82c422317668 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-0bc57460-970f-4387-8dde-82c422317668 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0bc57460-970f-4387-8dde-82c422317668
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:09:35.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7440" for this suite.
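Editor's note on the NodeSelector test above: it applies a random label (value 42) to a node, relaunches the pod with a matching nodeSelector, then removes the label again. The predicate being validated is plain exact-match containment; a self-contained sketch (Python, not scheduler code; `node_selector_matches` is a hypothetical helper):

```python
def node_selector_matches(node_labels, node_selector):
    """True if every key/value the pod's nodeSelector requests is on the node."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Label and value taken from the log lines above.
label = "kubernetes.io/e2e-0bc57460-970f-4387-8dde-82c422317668"
selector = {label: "42"}
```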
Jan 30 14:10:05.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:10:05.700: INFO: namespace sched-pred-7440 deletion completed in 30.226535671s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:48.687 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:10:05.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:10:05.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:10:16.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8669" for this suite.
Jan 30 14:10:58.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:10:58.850: INFO: namespace pods-8669 deletion completed in 42.408037762s
• [SLOW TEST:53.149 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:10:58.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 30 14:10:59.008: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8960,SelfLink:/api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-label-changed,UID:af1e9411-88f5-438f-b5e9-41b25a91c0b5,ResourceVersion:22446733,Generation:0,CreationTimestamp:2020-01-30 14:10:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 30 14:10:59.009: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8960,SelfLink:/api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-label-changed,UID:af1e9411-88f5-438f-b5e9-41b25a91c0b5,ResourceVersion:22446734,Generation:0,CreationTimestamp:2020-01-30 14:10:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 30 14:10:59.009: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8960,SelfLink:/api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-label-changed,UID:af1e9411-88f5-438f-b5e9-41b25a91c0b5,ResourceVersion:22446735,Generation:0,CreationTimestamp:2020-01-30 14:10:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 30 14:11:09.054: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8960,SelfLink:/api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-label-changed,UID:af1e9411-88f5-438f-b5e9-41b25a91c0b5,ResourceVersion:22446750,Generation:0,CreationTimestamp:2020-01-30 14:10:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 30 14:11:09.055: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8960,SelfLink:/api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-label-changed,UID:af1e9411-88f5-438f-b5e9-41b25a91c0b5,ResourceVersion:22446751,Generation:0,CreationTimestamp:2020-01-30 14:10:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 30 14:11:09.055: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8960,SelfLink:/api/v1/namespaces/watch-8960/configmaps/e2e-watch-test-label-changed,UID:af1e9411-88f5-438f-b5e9-41b25a91c0b5,ResourceVersion:22446752,Generation:0,CreationTimestamp:2020-01-30 14:10:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:11:09.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8960" for this suite. Jan 30 14:11:15.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:11:15.232: INFO: namespace watch-8960 deletion completed in 6.169386705s • [SLOW TEST:16.381 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:11:15.233: INFO: 
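Editor's note on the Watchers test above: the ADDED/MODIFIED/DELETED sequence falls out of how a label-selector-filtered watch classifies each object update. When a watched ConfigMap's label stops matching the selector the watch reports DELETED, and when the label is restored it reports ADDED. A simplified, self-contained sketch of that classification (Python, not client-go; ignores resourceVersion bookkeeping):

```python
def watch_event(selector, old_labels, new_labels):
    """Event a selector-filtered watch emits for one object update."""
    was = all(old_labels.get(k) == v for k, v in selector.items())
    now = all(new_labels.get(k) == v for k, v in selector.items())
    if was and now:
        return "MODIFIED"
    if was and not now:
        return "DELETED"  # object stopped matching: reported as a delete
    if now:
        return "ADDED"    # label restored: reported as an add
    return None           # never visible to this watch

# Selector taken from the e2e-watch-test-label-changed ConfigMap above.
selector = {"watch-this-configmap": "label-changed-and-restored"}
```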
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:11:39.421: INFO: Container started at 2020-01-30 14:11:23 +0000 UTC, pod became ready at 2020-01-30 14:11:39 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:11:39.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2219" for this suite.
Jan 30 14:12:01.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:12:01.582: INFO: namespace container-probe-2219 deletion completed in 22.150380554s
• [SLOW TEST:46.349 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
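Editor's note on the container-probe test above: its single INFO line encodes the whole assertion. The container started at 14:11:23 but the pod only became Ready at 14:11:39, i.e. not before the readiness probe's initial delay had elapsed. The gap the test observed works out as:

```python
from datetime import datetime

# Timestamps taken verbatim from the INFO line in the log above.
started = datetime(2020, 1, 30, 14, 11, 23)  # "Container started at ..."
ready = datetime(2020, 1, 30, 14, 11, 39)    # "pod became ready at ..."

# Delay between container start and pod readiness: initial probe delay
# plus probe/poll latency.
gap_seconds = (ready - started).total_seconds()
print(gap_seconds)  # 16.0
```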
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:12:01.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5854
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5854
STEP: Creating statefulset with conflicting port in namespace statefulset-5854
STEP: Waiting until pod test-pod will start running in namespace statefulset-5854
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5854
Jan 30 14:12:11.861: INFO: Observed stateful pod in namespace: statefulset-5854, name: ss-0, uid: 29333f9b-3691-49ee-b6be-c0372e9526b7, status phase: Pending. Waiting for statefulset controller to delete.
Jan 30 14:12:16.492: INFO: Observed stateful pod in namespace: statefulset-5854, name: ss-0, uid: 29333f9b-3691-49ee-b6be-c0372e9526b7, status phase: Failed. Waiting for statefulset controller to delete.
Jan 30 14:12:16.563: INFO: Observed stateful pod in namespace: statefulset-5854, name: ss-0, uid: 29333f9b-3691-49ee-b6be-c0372e9526b7, status phase: Failed. Waiting for statefulset controller to delete.
Jan 30 14:12:16.576: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5854
STEP: Removing pod with conflicting port in namespace statefulset-5854
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5854 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 30 14:12:26.807: INFO: Deleting all statefulset in ns statefulset-5854
Jan 30 14:12:26.813: INFO: Scaling statefulset ss to 0
Jan 30 14:12:36.845: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 14:12:36.852: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:12:36.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5854" for this suite.
Jan 30 14:12:42.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:12:43.055: INFO: namespace statefulset-5854 deletion completed in 6.162693656s
• [SLOW TEST:41.472 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:12:43.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:12:43.177: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 30 14:12:48.185: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 14:12:52.196: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 30 14:12:54.202: INFO: Creating deployment "test-rollover-deployment"
Jan 30 14:12:54.220: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 30 14:12:56.232: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 30 14:12:56.242: INFO: Ensure that both replica sets have 1 created replica
Jan 30 14:12:56.251: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 30 14:12:56.269: INFO: Updating deployment test-rollover-deployment
Jan 30 14:12:56.269: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 30 14:12:58.304: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 30 14:12:58.316: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 30 14:12:58.334: INFO: all replica sets need to contain the pod-template-hash label
Jan 30 14:12:58.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1,
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:00.351: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:00.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:02.348: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:02.348: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:04.345: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:04.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:06.353: INFO: all 
replica sets need to contain the pod-template-hash label Jan 30 14:13:06.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990385, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:08.352: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:08.352: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990385, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:10.360: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:10.361: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990385, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:12.349: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:12.349: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990385, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:14.349: INFO: all replica sets need to contain the pod-template-hash label Jan 30 14:13:14.350: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990385, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990374, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 14:13:16.364: INFO: Jan 30 14:13:16.364: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 30 14:13:16.373: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5116,SelfLink:/apis/apps/v1/namespaces/deployment-5116/deployments/test-rollover-deployment,UID:d2ccce86-fc3a-4a0b-a867-8644678bf436,ResourceVersion:22447141,Generation:2,CreationTimestamp:2020-01-30 14:12:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-30 14:12:54 +0000 UTC 2020-01-30 14:12:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-30 14:13:16 +0000 UTC 2020-01-30 14:12:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 30 14:13:16.376: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5116,SelfLink:/apis/apps/v1/namespaces/deployment-5116/replicasets/test-rollover-deployment-854595fc44,UID:042ca8dc-6684-42e0-9014-4665ea84141d,ResourceVersion:22447129,Generation:2,CreationTimestamp:2020-01-30 14:12:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2ccce86-fc3a-4a0b-a867-8644678bf436 0xc002e2d387 0xc002e2d388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 30 14:13:16.376: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 30 14:13:16.376: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5116,SelfLink:/apis/apps/v1/namespaces/deployment-5116/replicasets/test-rollover-controller,UID:56447fdc-629f-48b4-a6f1-5aa5e80a9e55,ResourceVersion:22447139,Generation:2,CreationTimestamp:2020-01-30 14:12:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2ccce86-fc3a-4a0b-a867-8644678bf436 0xc002e2d2b7 0xc002e2d2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 14:13:16.376: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5116,SelfLink:/apis/apps/v1/namespaces/deployment-5116/replicasets/test-rollover-deployment-9b8b997cf,UID:1263f8b6-b029-450e-a275-6d3b545919ec,ResourceVersion:22447087,Generation:2,CreationTimestamp:2020-01-30 14:12:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2ccce86-fc3a-4a0b-a867-8644678bf436 0xc002e2d450 0xc002e2d451}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 30 14:13:16.381: INFO: Pod "test-rollover-deployment-854595fc44-cbl96" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-cbl96,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5116,SelfLink:/api/v1/namespaces/deployment-5116/pods/test-rollover-deployment-854595fc44-cbl96,UID:64537e07-130e-4a6c-8759-28d76e1e351d,ResourceVersion:22447113,Generation:0,CreationTimestamp:2020-01-30 14:12:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 042ca8dc-6684-42e0-9014-4665ea84141d 0xc002687d37 0xc002687d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mmgr8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mmgr8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-mmgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002687fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002687fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:12:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:13:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:13:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:12:56 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-30 14:12:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-30 14:13:05 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a6ff65473bb6287835521c846032329df512a76a71bdc0f6999540821361c3ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:13:16.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5116" for this suite. Jan 30 14:13:22.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:13:22.541: INFO: namespace deployment-5116 deletion completed in 6.157055306s • [SLOW TEST:39.486 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:13:22.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-41117139-a787-4018-9f6b-e4d507d26f93 STEP: Creating a pod to test consume secrets Jan 30 
14:13:22.763: INFO: Waiting up to 5m0s for pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552" in namespace "secrets-4333" to be "success or failure" Jan 30 14:13:22.772: INFO: Pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552": Phase="Pending", Reason="", readiness=false. Elapsed: 8.707975ms Jan 30 14:13:24.794: INFO: Pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030415039s Jan 30 14:13:26.810: INFO: Pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046669991s Jan 30 14:13:28.818: INFO: Pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054888062s Jan 30 14:13:30.831: INFO: Pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067271942s STEP: Saw pod success Jan 30 14:13:30.831: INFO: Pod "pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552" satisfied condition "success or failure" Jan 30 14:13:30.836: INFO: Trying to get logs from node iruya-node pod pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552 container secret-env-test: STEP: delete the pod Jan 30 14:13:30.898: INFO: Waiting for pod pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552 to disappear Jan 30 14:13:30.920: INFO: Pod pod-secrets-43b42ef6-2f09-436f-a766-238fd4307552 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:13:30.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4333" for this suite. 
Jan 30 14:13:36.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:13:37.076: INFO: namespace secrets-4333 deletion completed in 6.146436887s • [SLOW TEST:14.534 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:13:37.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 30 14:13:37.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563" in namespace "projected-1332" to be "success or failure" Jan 30 14:13:37.191: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.134871ms Jan 30 14:13:39.204: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02392889s Jan 30 14:13:41.211: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030485991s Jan 30 14:13:43.258: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0778548s Jan 30 14:13:45.267: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086513606s Jan 30 14:13:47.275: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094430826s STEP: Saw pod success Jan 30 14:13:47.275: INFO: Pod "downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563" satisfied condition "success or failure" Jan 30 14:13:47.281: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563 container client-container: STEP: delete the pod Jan 30 14:13:47.558: INFO: Waiting for pod downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563 to disappear Jan 30 14:13:47.571: INFO: Pod downwardapi-volume-97afe41a-f296-4504-8f61-7b6b93256563 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:13:47.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1332" for this suite. 
Jan 30 14:13:53.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:13:53.750: INFO: namespace projected-1332 deletion completed in 6.163456242s • [SLOW TEST:16.674 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:13:53.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 30 14:13:53.883: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 30 14:14:10.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9425" for this suite. 
Jan 30 14:14:32.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 30 14:14:32.770: INFO: namespace init-container-9425 deletion completed in 22.134795129s • [SLOW TEST:39.018 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 30 14:14:32.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 30 14:14:32.899: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 30 14:14:34.834: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:14:34.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9775" for this suite.
Jan 30 14:14:47.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:14:47.577: INFO: namespace replication-controller-9775 deletion completed in 12.258745252s

• [SLOW TEST:14.807 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:14:47.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:14:47.722: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 26.24092ms)
Jan 30 14:14:47.737: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.95711ms)
Jan 30 14:14:47.742: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.323053ms)
Jan 30 14:14:47.745: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.689086ms)
Jan 30 14:14:47.749: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.641384ms)
Jan 30 14:14:47.753: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.27487ms)
Jan 30 14:14:47.757: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.367845ms)
Jan 30 14:14:47.761: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.201698ms)
Jan 30 14:14:47.767: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.325356ms)
Jan 30 14:14:47.773: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.170831ms)
Jan 30 14:14:47.785: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.367953ms)
Jan 30 14:14:47.792: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.816029ms)
Jan 30 14:14:47.797: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.24852ms)
Jan 30 14:14:47.803: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.874883ms)
Jan 30 14:14:47.810: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.832321ms)
Jan 30 14:14:47.819: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.550201ms)
Jan 30 14:14:47.830: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.785379ms)
Jan 30 14:14:47.838: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.238694ms)
Jan 30 14:14:47.843: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.367503ms)
Jan 30 14:14:47.850: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.882907ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:14:47.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8682" for this suite.
Jan 30 14:14:53.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:14:54.018: INFO: namespace proxy-8682 deletion completed in 6.161001806s

• [SLOW TEST:6.440 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
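Each of the twenty requests above hits the same apiserver path: the node proxy subresource with an explicit kubelet port. A minimal sketch of how that path is assembled (the helper name is illustrative, not from the test framework; `10250` is the default secure kubelet port seen in the log):

```python
def node_log_proxy_path(node_name: str, kubelet_port: int = 10250) -> str:
    """Build the apiserver proxy-subresource path for a node's kubelet /logs/ endpoint."""
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/logs/"

print(node_log_proxy_path("iruya-node"))
# /api/v1/nodes/iruya-node:10250/proxy/logs/
```

The `name:port` form in the path segment is what makes the kubelet port explicit; without it the apiserver proxies to the node's default.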
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:14:54.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-937, will wait for the garbage collector to delete the pods
Jan 30 14:15:04.223: INFO: Deleting Job.batch foo took: 20.052702ms
Jan 30 14:15:04.524: INFO: Terminating Job.batch foo pods took: 301.269221ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:15:46.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-937" for this suite.
Jan 30 14:15:54.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:15:54.834: INFO: namespace job-937 deletion completed in 8.186582109s

• [SLOW TEST:60.815 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
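The Job test above deletes the Job and then "will wait for the garbage collector to delete the pods". One common way a client expresses that is a `DeleteOptions` body with `propagationPolicy: Background`; a sketch under that assumption (field names follow the Kubernetes API, but the log does not show which policy the framework actually sent):

```python
import json

# Hedged sketch: DeleteOptions that delete the Job object immediately and let
# the garbage collector remove its dependent Pods asynchronously.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Background",
}
body = json.dumps(delete_options)
print(body)
```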
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:15:54.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 30 14:15:54.959: INFO: Waiting up to 5m0s for pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f" in namespace "downward-api-1423" to be "success or failure"
Jan 30 14:15:54.964: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.907091ms
Jan 30 14:15:56.973: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01414132s
Jan 30 14:15:58.986: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026571315s
Jan 30 14:16:01.011: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051638173s
Jan 30 14:16:03.020: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061306445s
Jan 30 14:16:05.079: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119791383s
STEP: Saw pod success
Jan 30 14:16:05.079: INFO: Pod "downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f" satisfied condition "success or failure"
Jan 30 14:16:05.090: INFO: Trying to get logs from node iruya-node pod downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f container dapi-container: 
STEP: delete the pod
Jan 30 14:16:05.257: INFO: Waiting for pod downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f to disappear
Jan 30 14:16:05.266: INFO: Pod downward-api-95f2cf0c-d449-461e-a3d0-6458f399665f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:16:05.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1423" for this suite.
Jan 30 14:16:11.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:16:11.435: INFO: namespace downward-api-1423 deletion completed in 6.162430297s

• [SLOW TEST:16.599 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
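The Downward API test above injects the pod's own UID into its container's environment via a `fieldRef`. A minimal sketch of such a pod manifest, built as a plain dict (image, names, and command are illustrative; only the `metadata.uid` fieldRef shape is the point):

```python
import json

# Minimal pod manifest sketch: POD_UID is filled in by the kubelet from
# metadata.uid through the downward API. Names/image are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "env": [{
                "name": "POD_UID",
                "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
            }],
        }],
    },
}
print(json.dumps(pod, indent=2))
```

The test's "success or failure" check then just asserts the container exits after printing an environment that contains the expected UID.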
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:16:11.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-7c01463c-8fd1-469d-806f-fd8d93cc7ed2
STEP: Creating a pod to test consume secrets
Jan 30 14:16:11.543: INFO: Waiting up to 5m0s for pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294" in namespace "secrets-3221" to be "success or failure"
Jan 30 14:16:11.550: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294": Phase="Pending", Reason="", readiness=false. Elapsed: 5.974811ms
Jan 30 14:16:13.567: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023755983s
Jan 30 14:16:15.582: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037907569s
Jan 30 14:16:17.588: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044405578s
Jan 30 14:16:19.597: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053071527s
Jan 30 14:16:21.605: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061561291s
STEP: Saw pod success
Jan 30 14:16:21.605: INFO: Pod "pod-secrets-64431448-a724-4f86-a372-45d67f3a7294" satisfied condition "success or failure"
Jan 30 14:16:21.610: INFO: Trying to get logs from node iruya-node pod pod-secrets-64431448-a724-4f86-a372-45d67f3a7294 container secret-volume-test: 
STEP: delete the pod
Jan 30 14:16:21.964: INFO: Waiting for pod pod-secrets-64431448-a724-4f86-a372-45d67f3a7294 to disappear
Jan 30 14:16:21.989: INFO: Pod pod-secrets-64431448-a724-4f86-a372-45d67f3a7294 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:16:21.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3221" for this suite.
Jan 30 14:16:28.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:16:28.375: INFO: namespace secrets-3221 deletion completed in 6.376434828s

• [SLOW TEST:16.939 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
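The Secrets test above mounts a secret with an item mapping and an explicit per-file mode. A sketch of the volume stanza that exercises this (the `secretName` is the one created in the log; the key, path, and `0o400` mode are hypothetical, since the log does not show them):

```python
# Sketch of a secret volume with a key->path item mapping and an explicit
# file mode. Key/path/mode values are illustrative assumptions.
volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-map-7c01463c-8fd1-469d-806f-fd8d93cc7ed2",
        "items": [
            {"key": "data-1", "path": "new-path-data-1", "mode": 0o400},
        ],
    },
}
print(volume["secret"]["items"][0]["path"])
```

With an item mapping, only the listed keys appear in the volume, at the given paths, with the given mode instead of the volume-wide `defaultMode`.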
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:16:28.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 30 14:16:28.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8160'
Jan 30 14:16:30.506: INFO: stderr: ""
Jan 30 14:16:30.507: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 14:16:30.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8160'
Jan 30 14:16:30.738: INFO: stderr: ""
Jan 30 14:16:30.739: INFO: stdout: "update-demo-nautilus-8265z update-demo-nautilus-fpx58 "
Jan 30 14:16:30.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8265z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8160'
Jan 30 14:16:30.939: INFO: stderr: ""
Jan 30 14:16:30.939: INFO: stdout: ""
Jan 30 14:16:30.939: INFO: update-demo-nautilus-8265z is created but not running
Jan 30 14:16:35.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8160'
Jan 30 14:16:37.268: INFO: stderr: ""
Jan 30 14:16:37.268: INFO: stdout: "update-demo-nautilus-8265z update-demo-nautilus-fpx58 "
Jan 30 14:16:37.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8265z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8160'
Jan 30 14:16:37.673: INFO: stderr: ""
Jan 30 14:16:37.673: INFO: stdout: ""
Jan 30 14:16:37.673: INFO: update-demo-nautilus-8265z is created but not running
Jan 30 14:16:42.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8160'
Jan 30 14:16:42.792: INFO: stderr: ""
Jan 30 14:16:42.792: INFO: stdout: "update-demo-nautilus-8265z update-demo-nautilus-fpx58 "
Jan 30 14:16:42.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8265z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8160'
Jan 30 14:16:42.893: INFO: stderr: ""
Jan 30 14:16:42.893: INFO: stdout: "true"
Jan 30 14:16:42.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8265z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8160'
Jan 30 14:16:43.007: INFO: stderr: ""
Jan 30 14:16:43.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 14:16:43.007: INFO: validating pod update-demo-nautilus-8265z
Jan 30 14:16:43.031: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 14:16:43.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 14:16:43.031: INFO: update-demo-nautilus-8265z is verified up and running
Jan 30 14:16:43.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpx58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8160'
Jan 30 14:16:43.131: INFO: stderr: ""
Jan 30 14:16:43.131: INFO: stdout: "true"
Jan 30 14:16:43.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fpx58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8160'
Jan 30 14:16:43.228: INFO: stderr: ""
Jan 30 14:16:43.228: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 14:16:43.228: INFO: validating pod update-demo-nautilus-fpx58
Jan 30 14:16:43.240: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 14:16:43.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 14:16:43.240: INFO: update-demo-nautilus-fpx58 is verified up and running
STEP: using delete to clean up resources
Jan 30 14:16:43.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8160'
Jan 30 14:16:43.418: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 14:16:43.418: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 30 14:16:43.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8160'
Jan 30 14:16:43.535: INFO: stderr: "No resources found.\n"
Jan 30 14:16:43.535: INFO: stdout: ""
Jan 30 14:16:43.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8160 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 14:16:43.907: INFO: stderr: ""
Jan 30 14:16:43.907: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:16:43.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8160" for this suite.
Jan 30 14:17:06.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:17:06.841: INFO: namespace kubectl-8160 deletion completed in 22.88808123s

• [SLOW TEST:38.465 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
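The Update Demo test above polls `kubectl get pods -o template ...` roughly every five seconds until the template prints `"true"` for each pod. A generic sketch of that poll-and-retry pattern (here `check` stands in for the kubectl invocation; the helper and its defaults are illustrative, not the framework's code):

```python
import time

def wait_until_running(check, interval=5, timeout=300, sleep=time.sleep):
    """Poll check() until it returns "true" or the timeout elapses."""
    elapsed = 0
    while elapsed < timeout:
        if check() == "true":
            return True
        sleep(interval)          # the e2e test re-runs kubectl after a delay
        elapsed += interval
    return False
```

In the log the first two probes return an empty stdout ("created but not running") before the third returns `"true"`.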
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:17:06.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 30 14:17:31.082: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:31.082: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:31.175687       8 log.go:172] (0xc000ddaa50) (0xc0012f5040) Create stream
I0130 14:17:31.175822       8 log.go:172] (0xc000ddaa50) (0xc0012f5040) Stream added, broadcasting: 1
I0130 14:17:31.185737       8 log.go:172] (0xc000ddaa50) Reply frame received for 1
I0130 14:17:31.185790       8 log.go:172] (0xc000ddaa50) (0xc0012f5180) Create stream
I0130 14:17:31.185801       8 log.go:172] (0xc000ddaa50) (0xc0012f5180) Stream added, broadcasting: 3
I0130 14:17:31.189546       8 log.go:172] (0xc000ddaa50) Reply frame received for 3
I0130 14:17:31.189738       8 log.go:172] (0xc000ddaa50) (0xc0012f52c0) Create stream
I0130 14:17:31.189751       8 log.go:172] (0xc000ddaa50) (0xc0012f52c0) Stream added, broadcasting: 5
I0130 14:17:31.191938       8 log.go:172] (0xc000ddaa50) Reply frame received for 5
I0130 14:17:31.342672       8 log.go:172] (0xc000ddaa50) Data frame received for 3
I0130 14:17:31.342795       8 log.go:172] (0xc0012f5180) (3) Data frame handling
I0130 14:17:31.342866       8 log.go:172] (0xc0012f5180) (3) Data frame sent
I0130 14:17:31.497107       8 log.go:172] (0xc000ddaa50) (0xc0012f52c0) Stream removed, broadcasting: 5
I0130 14:17:31.497380       8 log.go:172] (0xc000ddaa50) Data frame received for 1
I0130 14:17:31.497424       8 log.go:172] (0xc000ddaa50) (0xc0012f5180) Stream removed, broadcasting: 3
I0130 14:17:31.497480       8 log.go:172] (0xc0012f5040) (1) Data frame handling
I0130 14:17:31.497552       8 log.go:172] (0xc0012f5040) (1) Data frame sent
I0130 14:17:31.497582       8 log.go:172] (0xc000ddaa50) (0xc0012f5040) Stream removed, broadcasting: 1
I0130 14:17:31.497620       8 log.go:172] (0xc000ddaa50) Go away received
I0130 14:17:31.498448       8 log.go:172] (0xc000ddaa50) (0xc0012f5040) Stream removed, broadcasting: 1
I0130 14:17:31.498476       8 log.go:172] (0xc000ddaa50) (0xc0012f5180) Stream removed, broadcasting: 3
I0130 14:17:31.498523       8 log.go:172] (0xc000ddaa50) (0xc0012f52c0) Stream removed, broadcasting: 5
Jan 30 14:17:31.498: INFO: Exec stderr: ""
Jan 30 14:17:31.498: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:31.499: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:31.582144       8 log.go:172] (0xc000c928f0) (0xc0012be820) Create stream
I0130 14:17:31.582287       8 log.go:172] (0xc000c928f0) (0xc0012be820) Stream added, broadcasting: 1
I0130 14:17:31.590193       8 log.go:172] (0xc000c928f0) Reply frame received for 1
I0130 14:17:31.590232       8 log.go:172] (0xc000c928f0) (0xc0012f5360) Create stream
I0130 14:17:31.590246       8 log.go:172] (0xc000c928f0) (0xc0012f5360) Stream added, broadcasting: 3
I0130 14:17:31.591682       8 log.go:172] (0xc000c928f0) Reply frame received for 3
I0130 14:17:31.591711       8 log.go:172] (0xc000c928f0) (0xc000393b80) Create stream
I0130 14:17:31.591722       8 log.go:172] (0xc000c928f0) (0xc000393b80) Stream added, broadcasting: 5
I0130 14:17:31.593127       8 log.go:172] (0xc000c928f0) Reply frame received for 5
I0130 14:17:31.704284       8 log.go:172] (0xc000c928f0) Data frame received for 3
I0130 14:17:31.704429       8 log.go:172] (0xc0012f5360) (3) Data frame handling
I0130 14:17:31.704475       8 log.go:172] (0xc0012f5360) (3) Data frame sent
I0130 14:17:31.903785       8 log.go:172] (0xc000c928f0) (0xc0012f5360) Stream removed, broadcasting: 3
I0130 14:17:31.903985       8 log.go:172] (0xc000c928f0) Data frame received for 1
I0130 14:17:31.904012       8 log.go:172] (0xc000c928f0) (0xc000393b80) Stream removed, broadcasting: 5
I0130 14:17:31.904082       8 log.go:172] (0xc0012be820) (1) Data frame handling
I0130 14:17:31.904121       8 log.go:172] (0xc0012be820) (1) Data frame sent
I0130 14:17:31.904152       8 log.go:172] (0xc000c928f0) (0xc0012be820) Stream removed, broadcasting: 1
I0130 14:17:31.904189       8 log.go:172] (0xc000c928f0) Go away received
I0130 14:17:31.904824       8 log.go:172] (0xc000c928f0) (0xc0012be820) Stream removed, broadcasting: 1
I0130 14:17:31.904838       8 log.go:172] (0xc000c928f0) (0xc0012f5360) Stream removed, broadcasting: 3
I0130 14:17:31.904844       8 log.go:172] (0xc000c928f0) (0xc000393b80) Stream removed, broadcasting: 5
Jan 30 14:17:31.904: INFO: Exec stderr: ""
Jan 30 14:17:31.905: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:31.905: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:32.002129       8 log.go:172] (0xc002fbabb0) (0xc000556960) Create stream
I0130 14:17:32.002247       8 log.go:172] (0xc002fbabb0) (0xc000556960) Stream added, broadcasting: 1
I0130 14:17:32.011078       8 log.go:172] (0xc002fbabb0) Reply frame received for 1
I0130 14:17:32.011121       8 log.go:172] (0xc002fbabb0) (0xc001dedcc0) Create stream
I0130 14:17:32.011134       8 log.go:172] (0xc002fbabb0) (0xc001dedcc0) Stream added, broadcasting: 3
I0130 14:17:32.013079       8 log.go:172] (0xc002fbabb0) Reply frame received for 3
I0130 14:17:32.013101       8 log.go:172] (0xc002fbabb0) (0xc0012bea00) Create stream
I0130 14:17:32.013109       8 log.go:172] (0xc002fbabb0) (0xc0012bea00) Stream added, broadcasting: 5
I0130 14:17:32.014580       8 log.go:172] (0xc002fbabb0) Reply frame received for 5
I0130 14:17:32.119918       8 log.go:172] (0xc002fbabb0) Data frame received for 3
I0130 14:17:32.120148       8 log.go:172] (0xc001dedcc0) (3) Data frame handling
I0130 14:17:32.120188       8 log.go:172] (0xc001dedcc0) (3) Data frame sent
I0130 14:17:32.330232       8 log.go:172] (0xc002fbabb0) Data frame received for 1
I0130 14:17:32.330383       8 log.go:172] (0xc002fbabb0) (0xc0012bea00) Stream removed, broadcasting: 5
I0130 14:17:32.330456       8 log.go:172] (0xc000556960) (1) Data frame handling
I0130 14:17:32.330487       8 log.go:172] (0xc002fbabb0) (0xc001dedcc0) Stream removed, broadcasting: 3
I0130 14:17:32.330522       8 log.go:172] (0xc000556960) (1) Data frame sent
I0130 14:17:32.330535       8 log.go:172] (0xc002fbabb0) (0xc000556960) Stream removed, broadcasting: 1
I0130 14:17:32.330569       8 log.go:172] (0xc002fbabb0) Go away received
I0130 14:17:32.330919       8 log.go:172] (0xc002fbabb0) (0xc000556960) Stream removed, broadcasting: 1
I0130 14:17:32.330940       8 log.go:172] (0xc002fbabb0) (0xc001dedcc0) Stream removed, broadcasting: 3
I0130 14:17:32.330963       8 log.go:172] (0xc002fbabb0) (0xc0012bea00) Stream removed, broadcasting: 5
Jan 30 14:17:32.331: INFO: Exec stderr: ""
Jan 30 14:17:32.331: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:32.331: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:32.405979       8 log.go:172] (0xc0019ccbb0) (0xc001fbe0a0) Create stream
I0130 14:17:32.406221       8 log.go:172] (0xc0019ccbb0) (0xc001fbe0a0) Stream added, broadcasting: 1
I0130 14:17:32.414684       8 log.go:172] (0xc0019ccbb0) Reply frame received for 1
I0130 14:17:32.414738       8 log.go:172] (0xc0019ccbb0) (0xc000556aa0) Create stream
I0130 14:17:32.414750       8 log.go:172] (0xc0019ccbb0) (0xc000556aa0) Stream added, broadcasting: 3
I0130 14:17:32.417053       8 log.go:172] (0xc0019ccbb0) Reply frame received for 3
I0130 14:17:32.417073       8 log.go:172] (0xc0019ccbb0) (0xc0012beb40) Create stream
I0130 14:17:32.417080       8 log.go:172] (0xc0019ccbb0) (0xc0012beb40) Stream added, broadcasting: 5
I0130 14:17:32.419561       8 log.go:172] (0xc0019ccbb0) Reply frame received for 5
I0130 14:17:32.584050       8 log.go:172] (0xc0019ccbb0) Data frame received for 3
I0130 14:17:32.584244       8 log.go:172] (0xc000556aa0) (3) Data frame handling
I0130 14:17:32.584262       8 log.go:172] (0xc000556aa0) (3) Data frame sent
I0130 14:17:32.840615       8 log.go:172] (0xc0019ccbb0) Data frame received for 1
I0130 14:17:32.840903       8 log.go:172] (0xc0019ccbb0) (0xc000556aa0) Stream removed, broadcasting: 3
I0130 14:17:32.841087       8 log.go:172] (0xc001fbe0a0) (1) Data frame handling
I0130 14:17:32.841130       8 log.go:172] (0xc001fbe0a0) (1) Data frame sent
I0130 14:17:32.841188       8 log.go:172] (0xc0019ccbb0) (0xc0012beb40) Stream removed, broadcasting: 5
I0130 14:17:32.841247       8 log.go:172] (0xc0019ccbb0) (0xc001fbe0a0) Stream removed, broadcasting: 1
I0130 14:17:32.841272       8 log.go:172] (0xc0019ccbb0) Go away received
I0130 14:17:32.842610       8 log.go:172] (0xc0019ccbb0) (0xc001fbe0a0) Stream removed, broadcasting: 1
I0130 14:17:32.842631       8 log.go:172] (0xc0019ccbb0) (0xc000556aa0) Stream removed, broadcasting: 3
I0130 14:17:32.842642       8 log.go:172] (0xc0019ccbb0) (0xc0012beb40) Stream removed, broadcasting: 5
Jan 30 14:17:32.842: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 30 14:17:32.843: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:32.843: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:32.960204       8 log.go:172] (0xc000bead10) (0xc0011a4820) Create stream
I0130 14:17:32.960346       8 log.go:172] (0xc000bead10) (0xc0011a4820) Stream added, broadcasting: 1
I0130 14:17:32.973773       8 log.go:172] (0xc000bead10) Reply frame received for 1
I0130 14:17:32.973933       8 log.go:172] (0xc000bead10) (0xc0012bebe0) Create stream
I0130 14:17:32.973949       8 log.go:172] (0xc000bead10) (0xc0012bebe0) Stream added, broadcasting: 3
I0130 14:17:32.976507       8 log.go:172] (0xc000bead10) Reply frame received for 3
I0130 14:17:32.976602       8 log.go:172] (0xc000bead10) (0xc0011a4960) Create stream
I0130 14:17:32.976612       8 log.go:172] (0xc000bead10) (0xc0011a4960) Stream added, broadcasting: 5
I0130 14:17:32.978184       8 log.go:172] (0xc000bead10) Reply frame received for 5
I0130 14:17:33.073202       8 log.go:172] (0xc000bead10) Data frame received for 3
I0130 14:17:33.073305       8 log.go:172] (0xc0012bebe0) (3) Data frame handling
I0130 14:17:33.073318       8 log.go:172] (0xc0012bebe0) (3) Data frame sent
I0130 14:17:33.225631       8 log.go:172] (0xc000bead10) (0xc0012bebe0) Stream removed, broadcasting: 3
I0130 14:17:33.225746       8 log.go:172] (0xc000bead10) Data frame received for 1
I0130 14:17:33.225764       8 log.go:172] (0xc0011a4820) (1) Data frame handling
I0130 14:17:33.225775       8 log.go:172] (0xc0011a4820) (1) Data frame sent
I0130 14:17:33.226028       8 log.go:172] (0xc000bead10) (0xc0011a4820) Stream removed, broadcasting: 1
I0130 14:17:33.226134       8 log.go:172] (0xc000bead10) (0xc0011a4960) Stream removed, broadcasting: 5
I0130 14:17:33.226172       8 log.go:172] (0xc000bead10) (0xc0011a4820) Stream removed, broadcasting: 1
I0130 14:17:33.226201       8 log.go:172] (0xc000bead10) (0xc0012bebe0) Stream removed, broadcasting: 3
I0130 14:17:33.226207       8 log.go:172] (0xc000bead10) (0xc0011a4960) Stream removed, broadcasting: 5
Jan 30 14:17:33.226: INFO: Exec stderr: ""
Jan 30 14:17:33.226: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:33.226: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:33.227277       8 log.go:172] (0xc000bead10) Go away received
I0130 14:17:33.286414       8 log.go:172] (0xc001e4c0b0) (0xc0012bf180) Create stream
I0130 14:17:33.286627       8 log.go:172] (0xc001e4c0b0) (0xc0012bf180) Stream added, broadcasting: 1
I0130 14:17:33.293803       8 log.go:172] (0xc001e4c0b0) Reply frame received for 1
I0130 14:17:33.293887       8 log.go:172] (0xc001e4c0b0) (0xc0012f5540) Create stream
I0130 14:17:33.293904       8 log.go:172] (0xc001e4c0b0) (0xc0012f5540) Stream added, broadcasting: 3
I0130 14:17:33.297235       8 log.go:172] (0xc001e4c0b0) Reply frame received for 3
I0130 14:17:33.297264       8 log.go:172] (0xc001e4c0b0) (0xc001fbe140) Create stream
I0130 14:17:33.297277       8 log.go:172] (0xc001e4c0b0) (0xc001fbe140) Stream added, broadcasting: 5
I0130 14:17:33.300517       8 log.go:172] (0xc001e4c0b0) Reply frame received for 5
I0130 14:17:33.395629       8 log.go:172] (0xc001e4c0b0) Data frame received for 3
I0130 14:17:33.395724       8 log.go:172] (0xc0012f5540) (3) Data frame handling
I0130 14:17:33.395754       8 log.go:172] (0xc0012f5540) (3) Data frame sent
I0130 14:17:33.526959       8 log.go:172] (0xc001e4c0b0) Data frame received for 1
I0130 14:17:33.527173       8 log.go:172] (0xc001e4c0b0) (0xc001fbe140) Stream removed, broadcasting: 5
I0130 14:17:33.527283       8 log.go:172] (0xc0012bf180) (1) Data frame handling
I0130 14:17:33.527316       8 log.go:172] (0xc0012bf180) (1) Data frame sent
I0130 14:17:33.527371       8 log.go:172] (0xc001e4c0b0) (0xc0012f5540) Stream removed, broadcasting: 3
I0130 14:17:33.527436       8 log.go:172] (0xc001e4c0b0) (0xc0012bf180) Stream removed, broadcasting: 1
I0130 14:17:33.527450       8 log.go:172] (0xc001e4c0b0) Go away received
I0130 14:17:33.528499       8 log.go:172] (0xc001e4c0b0) (0xc0012bf180) Stream removed, broadcasting: 1
I0130 14:17:33.528511       8 log.go:172] (0xc001e4c0b0) (0xc0012f5540) Stream removed, broadcasting: 3
I0130 14:17:33.528675       8 log.go:172] (0xc001e4c0b0) (0xc001fbe140) Stream removed, broadcasting: 5
Jan 30 14:17:33.528: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 30 14:17:33.528: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:33.529: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:33.613155       8 log.go:172] (0xc001ef62c0) (0xc0012f5c20) Create stream
I0130 14:17:33.613533       8 log.go:172] (0xc001ef62c0) (0xc0012f5c20) Stream added, broadcasting: 1
I0130 14:17:33.675009       8 log.go:172] (0xc001ef62c0) Reply frame received for 1
I0130 14:17:33.675570       8 log.go:172] (0xc001ef62c0) (0xc000556c80) Create stream
I0130 14:17:33.675633       8 log.go:172] (0xc001ef62c0) (0xc000556c80) Stream added, broadcasting: 3
I0130 14:17:33.681457       8 log.go:172] (0xc001ef62c0) Reply frame received for 3
I0130 14:17:33.681494       8 log.go:172] (0xc001ef62c0) (0xc0012f5d60) Create stream
I0130 14:17:33.681513       8 log.go:172] (0xc001ef62c0) (0xc0012f5d60) Stream added, broadcasting: 5
I0130 14:17:33.685914       8 log.go:172] (0xc001ef62c0) Reply frame received for 5
I0130 14:17:33.972549       8 log.go:172] (0xc001ef62c0) Data frame received for 3
I0130 14:17:33.972997       8 log.go:172] (0xc000556c80) (3) Data frame handling
I0130 14:17:33.973114       8 log.go:172] (0xc000556c80) (3) Data frame sent
I0130 14:17:34.169978       8 log.go:172] (0xc001ef62c0) (0xc0012f5d60) Stream removed, broadcasting: 5
I0130 14:17:34.170118       8 log.go:172] (0xc001ef62c0) Data frame received for 1
I0130 14:17:34.170142       8 log.go:172] (0xc001ef62c0) (0xc000556c80) Stream removed, broadcasting: 3
I0130 14:17:34.170233       8 log.go:172] (0xc0012f5c20) (1) Data frame handling
I0130 14:17:34.170256       8 log.go:172] (0xc0012f5c20) (1) Data frame sent
I0130 14:17:34.170266       8 log.go:172] (0xc001ef62c0) (0xc0012f5c20) Stream removed, broadcasting: 1
I0130 14:17:34.170286       8 log.go:172] (0xc001ef62c0) Go away received
I0130 14:17:34.170868       8 log.go:172] (0xc001ef62c0) (0xc0012f5c20) Stream removed, broadcasting: 1
I0130 14:17:34.170899       8 log.go:172] (0xc001ef62c0) (0xc000556c80) Stream removed, broadcasting: 3
I0130 14:17:34.170917       8 log.go:172] (0xc001ef62c0) (0xc0012f5d60) Stream removed, broadcasting: 5
Jan 30 14:17:34.170: INFO: Exec stderr: ""
Jan 30 14:17:34.171: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:34.171: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:34.230191       8 log.go:172] (0xc000bebef0) (0xc0011a4dc0) Create stream
I0130 14:17:34.230326       8 log.go:172] (0xc000bebef0) (0xc0011a4dc0) Stream added, broadcasting: 1
I0130 14:17:34.237252       8 log.go:172] (0xc000bebef0) Reply frame received for 1
I0130 14:17:34.237313       8 log.go:172] (0xc000bebef0) (0xc000556e60) Create stream
I0130 14:17:34.237322       8 log.go:172] (0xc000bebef0) (0xc000556e60) Stream added, broadcasting: 3
I0130 14:17:34.240745       8 log.go:172] (0xc000bebef0) Reply frame received for 3
I0130 14:17:34.240773       8 log.go:172] (0xc000bebef0) (0xc0012bf2c0) Create stream
I0130 14:17:34.240781       8 log.go:172] (0xc000bebef0) (0xc0012bf2c0) Stream added, broadcasting: 5
I0130 14:17:34.242189       8 log.go:172] (0xc000bebef0) Reply frame received for 5
I0130 14:17:34.357236       8 log.go:172] (0xc000bebef0) Data frame received for 3
I0130 14:17:34.357310       8 log.go:172] (0xc000556e60) (3) Data frame handling
I0130 14:17:34.357342       8 log.go:172] (0xc000556e60) (3) Data frame sent
I0130 14:17:34.500271       8 log.go:172] (0xc000bebef0) Data frame received for 1
I0130 14:17:34.500401       8 log.go:172] (0xc000bebef0) (0xc000556e60) Stream removed, broadcasting: 3
I0130 14:17:34.500522       8 log.go:172] (0xc0011a4dc0) (1) Data frame handling
I0130 14:17:34.500562       8 log.go:172] (0xc0011a4dc0) (1) Data frame sent
I0130 14:17:34.500577       8 log.go:172] (0xc000bebef0) (0xc0011a4dc0) Stream removed, broadcasting: 1
I0130 14:17:34.502452       8 log.go:172] (0xc000bebef0) (0xc0012bf2c0) Stream removed, broadcasting: 5
I0130 14:17:34.502573       8 log.go:172] (0xc000bebef0) (0xc0011a4dc0) Stream removed, broadcasting: 1
I0130 14:17:34.502603       8 log.go:172] (0xc000bebef0) (0xc000556e60) Stream removed, broadcasting: 3
I0130 14:17:34.502627       8 log.go:172] (0xc000bebef0) (0xc0012bf2c0) Stream removed, broadcasting: 5
Jan 30 14:17:34.502: INFO: Exec stderr: ""
Jan 30 14:17:34.502: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:34.503: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:34.503235       8 log.go:172] (0xc000bebef0) Go away received
I0130 14:17:34.581461       8 log.go:172] (0xc0019cdad0) (0xc001fbe3c0) Create stream
I0130 14:17:34.581782       8 log.go:172] (0xc0019cdad0) (0xc001fbe3c0) Stream added, broadcasting: 1
I0130 14:17:34.589293       8 log.go:172] (0xc0019cdad0) Reply frame received for 1
I0130 14:17:34.589346       8 log.go:172] (0xc0019cdad0) (0xc000556f00) Create stream
I0130 14:17:34.589359       8 log.go:172] (0xc0019cdad0) (0xc000556f00) Stream added, broadcasting: 3
I0130 14:17:34.590985       8 log.go:172] (0xc0019cdad0) Reply frame received for 3
I0130 14:17:34.591086       8 log.go:172] (0xc0019cdad0) (0xc0011a50e0) Create stream
I0130 14:17:34.591098       8 log.go:172] (0xc0019cdad0) (0xc0011a50e0) Stream added, broadcasting: 5
I0130 14:17:34.592251       8 log.go:172] (0xc0019cdad0) Reply frame received for 5
I0130 14:17:34.666516       8 log.go:172] (0xc0019cdad0) Data frame received for 3
I0130 14:17:34.666614       8 log.go:172] (0xc000556f00) (3) Data frame handling
I0130 14:17:34.666630       8 log.go:172] (0xc000556f00) (3) Data frame sent
I0130 14:17:34.759747       8 log.go:172] (0xc0019cdad0) Data frame received for 1
I0130 14:17:34.759894       8 log.go:172] (0xc0019cdad0) (0xc000556f00) Stream removed, broadcasting: 3
I0130 14:17:34.760187       8 log.go:172] (0xc001fbe3c0) (1) Data frame handling
I0130 14:17:34.760290       8 log.go:172] (0xc001fbe3c0) (1) Data frame sent
I0130 14:17:34.760305       8 log.go:172] (0xc0019cdad0) (0xc0011a50e0) Stream removed, broadcasting: 5
I0130 14:17:34.760363       8 log.go:172] (0xc0019cdad0) (0xc001fbe3c0) Stream removed, broadcasting: 1
I0130 14:17:34.760393       8 log.go:172] (0xc0019cdad0) Go away received
I0130 14:17:34.760688       8 log.go:172] (0xc0019cdad0) (0xc001fbe3c0) Stream removed, broadcasting: 1
I0130 14:17:34.760700       8 log.go:172] (0xc0019cdad0) (0xc000556f00) Stream removed, broadcasting: 3
I0130 14:17:34.760706       8 log.go:172] (0xc0019cdad0) (0xc0011a50e0) Stream removed, broadcasting: 5
Jan 30 14:17:34.760: INFO: Exec stderr: ""
Jan 30 14:17:34.760: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1632 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:17:34.760: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:17:34.810041       8 log.go:172] (0xc002568580) (0xc001fbe780) Create stream
I0130 14:17:34.810091       8 log.go:172] (0xc002568580) (0xc001fbe780) Stream added, broadcasting: 1
I0130 14:17:34.819528       8 log.go:172] (0xc002568580) Reply frame received for 1
I0130 14:17:34.819564       8 log.go:172] (0xc002568580) (0xc0012f5e00) Create stream
I0130 14:17:34.819580       8 log.go:172] (0xc002568580) (0xc0012f5e00) Stream added, broadcasting: 3
I0130 14:17:34.822813       8 log.go:172] (0xc002568580) Reply frame received for 3
I0130 14:17:34.822844       8 log.go:172] (0xc002568580) (0xc00219e000) Create stream
I0130 14:17:34.822853       8 log.go:172] (0xc002568580) (0xc00219e000) Stream added, broadcasting: 5
I0130 14:17:34.827555       8 log.go:172] (0xc002568580) Reply frame received for 5
I0130 14:17:34.914624       8 log.go:172] (0xc002568580) Data frame received for 3
I0130 14:17:34.914761       8 log.go:172] (0xc0012f5e00) (3) Data frame handling
I0130 14:17:34.914826       8 log.go:172] (0xc0012f5e00) (3) Data frame sent
I0130 14:17:35.026688       8 log.go:172] (0xc002568580) Data frame received for 1
I0130 14:17:35.026907       8 log.go:172] (0xc002568580) (0xc0012f5e00) Stream removed, broadcasting: 3
I0130 14:17:35.026990       8 log.go:172] (0xc001fbe780) (1) Data frame handling
I0130 14:17:35.027044       8 log.go:172] (0xc001fbe780) (1) Data frame sent
I0130 14:17:35.027092       8 log.go:172] (0xc002568580) (0xc00219e000) Stream removed, broadcasting: 5
I0130 14:17:35.027133       8 log.go:172] (0xc002568580) (0xc001fbe780) Stream removed, broadcasting: 1
I0130 14:17:35.027166       8 log.go:172] (0xc002568580) Go away received
I0130 14:17:35.027517       8 log.go:172] (0xc002568580) (0xc001fbe780) Stream removed, broadcasting: 1
I0130 14:17:35.027537       8 log.go:172] (0xc002568580) (0xc0012f5e00) Stream removed, broadcasting: 3
I0130 14:17:35.027550       8 log.go:172] (0xc002568580) (0xc00219e000) Stream removed, broadcasting: 5
Jan 30 14:17:35.027: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:17:35.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1632" for this suite.
Jan 30 14:18:37.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:18:37.163: INFO: namespace e2e-kubelet-etc-hosts-1632 deletion completed in 1m2.127458681s

• [SLOW TEST:90.321 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:18:37.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-cb445fff-c115-4ef3-bcb0-10c34714d75b
STEP: Creating configMap with name cm-test-opt-upd-94661a97-2913-401e-90f0-57467ef3064b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cb445fff-c115-4ef3-bcb0-10c34714d75b
STEP: Updating configmap cm-test-opt-upd-94661a97-2913-401e-90f0-57467ef3064b
STEP: Creating configMap with name cm-test-opt-create-22916372-9330-4fc9-8743-367e63ec6f71
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:18:51.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4144" for this suite.
Jan 30 14:19:15.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:19:15.822: INFO: namespace configmap-4144 deletion completed in 24.150524963s

• [SLOW TEST:38.658 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:19:15.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:19:15.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237" in namespace "downward-api-9756" to be "success or failure"
Jan 30 14:19:15.968: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237": Phase="Pending", Reason="", readiness=false. Elapsed: 53.954881ms
Jan 30 14:19:17.981: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066896619s
Jan 30 14:19:19.991: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077422418s
Jan 30 14:19:22.003: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088751386s
Jan 30 14:19:24.013: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099036631s
Jan 30 14:19:26.020: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106397801s
STEP: Saw pod success
Jan 30 14:19:26.020: INFO: Pod "downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237" satisfied condition "success or failure"
Jan 30 14:19:26.023: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237 container client-container: 
STEP: delete the pod
Jan 30 14:19:26.195: INFO: Waiting for pod downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237 to disappear
Jan 30 14:19:26.212: INFO: Pod downwardapi-volume-92ce33b2-9a8a-4852-b73b-851a8503a237 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:19:26.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9756" for this suite.
Jan 30 14:19:32.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:19:32.399: INFO: namespace downward-api-9756 deletion completed in 6.178403879s

• [SLOW TEST:16.577 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:19:32.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:19:40.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2369" for this suite.
Jan 30 14:19:46.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:19:46.736: INFO: namespace kubelet-test-2369 deletion completed in 6.194614541s

• [SLOW TEST:14.336 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:19:46.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 30 14:19:46.841: INFO: Waiting up to 5m0s for pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e" in namespace "downward-api-2906" to be "success or failure"
Jan 30 14:19:46.845: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.820987ms
Jan 30 14:19:48.871: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030481759s
Jan 30 14:19:50.885: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044537176s
Jan 30 14:19:52.900: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058991762s
Jan 30 14:19:54.917: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076318413s
Jan 30 14:19:56.924: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082889096s
STEP: Saw pod success
Jan 30 14:19:56.924: INFO: Pod "downward-api-1441ad98-96ea-401a-9363-12e7a72f462e" satisfied condition "success or failure"
Jan 30 14:19:56.926: INFO: Trying to get logs from node iruya-node pod downward-api-1441ad98-96ea-401a-9363-12e7a72f462e container dapi-container: 
STEP: delete the pod
Jan 30 14:19:57.051: INFO: Waiting for pod downward-api-1441ad98-96ea-401a-9363-12e7a72f462e to disappear
Jan 30 14:19:57.060: INFO: Pod downward-api-1441ad98-96ea-401a-9363-12e7a72f462e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:19:57.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2906" for this suite.
Jan 30 14:20:03.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:20:03.258: INFO: namespace downward-api-2906 deletion completed in 6.190431123s

• [SLOW TEST:16.522 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:20:03.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan 30 14:20:03.371: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:20:03.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6132" for this suite.
Jan 30 14:20:09.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:20:09.742: INFO: namespace kubectl-6132 deletion completed in 6.20398243s

• [SLOW TEST:6.483 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:20:09.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9c45e0c3-3253-4c53-8ad6-d55ba1021b56
STEP: Creating a pod to test consume configMaps
Jan 30 14:20:09.863: INFO: Waiting up to 5m0s for pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a" in namespace "configmap-5637" to be "success or failure"
Jan 30 14:20:09.875: INFO: Pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.742475ms
Jan 30 14:20:11.887: INFO: Pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023776913s
Jan 30 14:20:13.904: INFO: Pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040418572s
Jan 30 14:20:15.913: INFO: Pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049646162s
Jan 30 14:20:17.923: INFO: Pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059520616s
STEP: Saw pod success
Jan 30 14:20:17.923: INFO: Pod "pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a" satisfied condition "success or failure"
Jan 30 14:20:17.929: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a container configmap-volume-test: 
STEP: delete the pod
Jan 30 14:20:18.006: INFO: Waiting for pod pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a to disappear
Jan 30 14:20:18.011: INFO: Pod pod-configmaps-b76ee11b-aa2f-4e5c-a774-1d676a347f5a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:20:18.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5637" for this suite.
Jan 30 14:20:24.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:20:24.153: INFO: namespace configmap-5637 deletion completed in 6.136503288s

• [SLOW TEST:14.411 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:20:24.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 30 14:20:24.255: INFO: Waiting up to 5m0s for pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1" in namespace "emptydir-5429" to be "success or failure"
Jan 30 14:20:24.263: INFO: Pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.621789ms
Jan 30 14:20:26.275: INFO: Pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019085589s
Jan 30 14:20:28.283: INFO: Pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027031783s
Jan 30 14:20:30.308: INFO: Pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052450725s
Jan 30 14:20:32.396: INFO: Pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139761382s
STEP: Saw pod success
Jan 30 14:20:32.397: INFO: Pod "pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1" satisfied condition "success or failure"
Jan 30 14:20:32.422: INFO: Trying to get logs from node iruya-node pod pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1 container test-container: 
STEP: delete the pod
Jan 30 14:20:32.568: INFO: Waiting for pod pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1 to disappear
Jan 30 14:20:32.581: INFO: Pod pod-8f17365c-41bf-42b7-af1f-a80d6ec154b1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:20:32.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5429" for this suite.
Jan 30 14:20:38.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:20:38.739: INFO: namespace emptydir-5429 deletion completed in 6.122799918s

• [SLOW TEST:14.586 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:20:38.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 30 14:20:47.744: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:20:47.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3811" for this suite.
Jan 30 14:20:53.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:20:54.075: INFO: namespace container-runtime-3811 deletion completed in 6.239029373s

• [SLOW TEST:15.335 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:20:54.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 30 14:20:54.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 30 14:20:54.504: INFO: stderr: ""
Jan 30 14:20:54.504: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:20:54.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6828" for this suite.
Jan 30 14:21:00.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:21:00.694: INFO: namespace kubectl-6828 deletion completed in 6.174673374s

• [SLOW TEST:6.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:21:00.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:21:00.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a" in namespace "projected-7327" to be "success or failure"
Jan 30 14:21:00.850: INFO: Pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.011469ms
Jan 30 14:21:02.870: INFO: Pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036667063s
Jan 30 14:21:04.909: INFO: Pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075340593s
Jan 30 14:21:06.950: INFO: Pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116040279s
Jan 30 14:21:08.960: INFO: Pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126384377s
STEP: Saw pod success
Jan 30 14:21:08.960: INFO: Pod "downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a" satisfied condition "success or failure"
Jan 30 14:21:08.965: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a container client-container: 
STEP: delete the pod
Jan 30 14:21:09.052: INFO: Waiting for pod downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a to disappear
Jan 30 14:21:09.084: INFO: Pod downwardapi-volume-26b2e1ed-2fcb-44b7-826e-4ef246809c7a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:21:09.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7327" for this suite.
Jan 30 14:21:15.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:21:15.285: INFO: namespace projected-7327 deletion completed in 6.194307035s

• [SLOW TEST:14.590 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:21:15.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 30 14:21:15.427: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 30 14:21:16.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 14:21:18.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 14:21:20.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 14:21:22.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 14:21:24.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715990876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 14:21:27.573: INFO: Waited 947.89168ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:21:28.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5679" for this suite.
Jan 30 14:21:34.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:21:34.313: INFO: namespace aggregator-5679 deletion completed in 6.296801318s

• [SLOW TEST:19.027 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:21:34.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 30 14:21:34.538: INFO: Waiting up to 5m0s for pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2" in namespace "containers-6567" to be "success or failure"
Jan 30 14:21:34.547: INFO: Pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.069393ms
Jan 30 14:21:36.566: INFO: Pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02741384s
Jan 30 14:21:38.591: INFO: Pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052823011s
Jan 30 14:21:40.630: INFO: Pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091407022s
Jan 30 14:21:42.659: INFO: Pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12030998s
STEP: Saw pod success
Jan 30 14:21:42.660: INFO: Pod "client-containers-802e37ea-1634-49f3-91a3-819521e097d2" satisfied condition "success or failure"
Jan 30 14:21:42.672: INFO: Trying to get logs from node iruya-node pod client-containers-802e37ea-1634-49f3-91a3-819521e097d2 container test-container: 
STEP: delete the pod
Jan 30 14:21:42.879: INFO: Waiting for pod client-containers-802e37ea-1634-49f3-91a3-819521e097d2 to disappear
Jan 30 14:21:42.920: INFO: Pod client-containers-802e37ea-1634-49f3-91a3-819521e097d2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:21:42.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6567" for this suite.
Jan 30 14:21:48.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:21:49.059: INFO: namespace containers-6567 deletion completed in 6.13438449s

• [SLOW TEST:14.746 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:21:49.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-czlvz in namespace proxy-8084
I0130 14:21:49.217428       8 runners.go:180] Created replication controller with name: proxy-service-czlvz, namespace: proxy-8084, replica count: 1
I0130 14:21:50.268923       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:51.269364       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:52.270277       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:53.271612       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:54.272584       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:55.273099       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:56.273914       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 14:21:57.274949       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:21:58.275880       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:21:59.276728       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:00.277180       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:01.277625       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:02.278099       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:03.278594       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:04.279372       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:05.279851       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 14:22:06.280236       8 runners.go:180] proxy-service-czlvz Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 30 14:22:06.315: INFO: setup took 17.210356097s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 30 14:22:06.375: INFO: (0) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 56.038689ms)
Jan 30 14:22:06.375: INFO: (0) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 56.310657ms)
Jan 30 14:22:06.375: INFO: (0) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 57.279827ms)
Jan 30 14:22:06.375: INFO: (0) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 57.638532ms)
Jan 30 14:22:06.378: INFO: (0) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 60.475188ms)
Jan 30 14:22:06.378: INFO: (0) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 59.512405ms)
Jan 30 14:22:06.378: INFO: (0) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 62.259482ms)
Jan 30 14:22:06.378: INFO: (0) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 59.714959ms)
Jan 30 14:22:06.378: INFO: (0) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 60.537063ms)
Jan 30 14:22:06.387: INFO: (0) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 68.687473ms)
Jan 30 14:22:06.387: INFO: (0) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 69.332131ms)
Jan 30 14:22:06.396: INFO: (0) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 78.074515ms)
Jan 30 14:22:06.396: INFO: (0) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 43.960334ms)
Jan 30 14:22:06.444: INFO: (1) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 43.99754ms)
Jan 30 14:22:06.444: INFO: (1) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 44.083532ms)
Jan 30 14:22:06.444: INFO: (1) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 43.860854ms)
Jan 30 14:22:06.444: INFO: (1) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 43.906356ms)
Jan 30 14:22:06.444: INFO: (1) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 44.166616ms)
Jan 30 14:22:06.444: INFO: (1) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 44.482235ms)
Jan 30 14:22:06.446: INFO: (1) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test (200; 46.327443ms)
Jan 30 14:22:06.446: INFO: (1) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 46.530546ms)
Jan 30 14:22:06.459: INFO: (2) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 12.083194ms)
Jan 30 14:22:06.459: INFO: (2) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 12.327371ms)
Jan 30 14:22:06.459: INFO: (2) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 12.509366ms)
Jan 30 14:22:06.459: INFO: (2) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 12.603648ms)
Jan 30 14:22:06.460: INFO: (2) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 13.430887ms)
Jan 30 14:22:06.460: INFO: (2) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 13.636969ms)
Jan 30 14:22:06.460: INFO: (2) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 13.859137ms)
Jan 30 14:22:06.460: INFO: (2) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 13.567002ms)
Jan 30 14:22:06.460: INFO: (2) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 13.85313ms)
Jan 30 14:22:06.461: INFO: (2) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 27.498007ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 27.430495ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 27.403502ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 27.132515ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 28.213533ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 27.973049ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 28.199887ms)
Jan 30 14:22:06.496: INFO: (3) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 28.155508ms)
Jan 30 14:22:06.497: INFO: (3) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test (200; 28.9203ms)
Jan 30 14:22:06.497: INFO: (3) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 28.819117ms)
Jan 30 14:22:06.518: INFO: (4) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 21.05336ms)
Jan 30 14:22:06.518: INFO: (4) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 21.052798ms)
Jan 30 14:22:06.520: INFO: (4) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 22.861046ms)
Jan 30 14:22:06.521: INFO: (4) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 23.547832ms)
Jan 30 14:22:06.521: INFO: (4) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 23.699046ms)
Jan 30 14:22:06.521: INFO: (4) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 23.603331ms)
Jan 30 14:22:06.521: INFO: (4) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 23.644946ms)
Jan 30 14:22:06.521: INFO: (4) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 24.020101ms)
Jan 30 14:22:06.522: INFO: (4) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 24.146484ms)
Jan 30 14:22:06.522: INFO: (4) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 24.51923ms)
Jan 30 14:22:06.525: INFO: (4) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 27.723975ms)
Jan 30 14:22:06.525: INFO: (4) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 28.043884ms)
Jan 30 14:22:06.526: INFO: (4) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 28.419491ms)
Jan 30 14:22:06.526: INFO: (4) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 29.144671ms)
Jan 30 14:22:06.547: INFO: (5) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 20.543535ms)
Jan 30 14:22:06.548: INFO: (5) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 21.301502ms)
Jan 30 14:22:06.548: INFO: (5) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 21.572728ms)
Jan 30 14:22:06.549: INFO: (5) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 22.381246ms)
Jan 30 14:22:06.549: INFO: (5) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 22.339778ms)
Jan 30 14:22:06.549: INFO: (5) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 22.363678ms)
Jan 30 14:22:06.549: INFO: (5) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 22.621816ms)
Jan 30 14:22:06.552: INFO: (5) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 24.983347ms)
Jan 30 14:22:06.552: INFO: (5) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 25.396266ms)
Jan 30 14:22:06.552: INFO: (5) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 25.151932ms)
Jan 30 14:22:06.552: INFO: (5) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 25.73032ms)
Jan 30 14:22:06.553: INFO: (5) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 26.167921ms)
Jan 30 14:22:06.553: INFO: (5) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 26.886952ms)
Jan 30 14:22:06.571: INFO: (6) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 18.97262ms)
Jan 30 14:22:06.573: INFO: (6) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 18.935845ms)
Jan 30 14:22:06.573: INFO: (6) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 19.012442ms)
Jan 30 14:22:06.573: INFO: (6) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 19.31092ms)
Jan 30 14:22:06.573: INFO: (6) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 19.539113ms)
Jan 30 14:22:06.573: INFO: (6) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 19.576454ms)
Jan 30 14:22:06.573: INFO: (6) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 19.728804ms)
Jan 30 14:22:06.574: INFO: (6) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 19.874772ms)
Jan 30 14:22:06.574: INFO: (6) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 20.017819ms)
Jan 30 14:22:06.574: INFO: (6) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 20.47188ms)
Jan 30 14:22:06.574: INFO: (6) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 20.912389ms)
Jan 30 14:22:06.576: INFO: (6) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 21.994989ms)
Jan 30 14:22:06.591: INFO: (7) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 14.531299ms)
Jan 30 14:22:06.591: INFO: (7) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 14.445465ms)
Jan 30 14:22:06.591: INFO: (7) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 14.705157ms)
Jan 30 14:22:06.591: INFO: (7) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 14.392678ms)
Jan 30 14:22:06.591: INFO: (7) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 14.836621ms)
Jan 30 14:22:06.591: INFO: (7) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 15.023666ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 16.910774ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 16.759934ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 16.94292ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 16.878641ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 16.932903ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 16.974696ms)
Jan 30 14:22:06.593: INFO: (7) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 9.836869ms)
Jan 30 14:22:06.605: INFO: (8) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 10.284772ms)
Jan 30 14:22:06.605: INFO: (8) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test (200; 9.878156ms)
Jan 30 14:22:06.606: INFO: (8) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 10.412829ms)
Jan 30 14:22:06.606: INFO: (8) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 10.524302ms)
Jan 30 14:22:06.606: INFO: (8) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 10.996159ms)
Jan 30 14:22:06.607: INFO: (8) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 11.286819ms)
Jan 30 14:22:06.611: INFO: (8) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 15.839639ms)
Jan 30 14:22:06.611: INFO: (8) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 16.481523ms)
Jan 30 14:22:06.611: INFO: (8) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 16.159174ms)
Jan 30 14:22:06.611: INFO: (8) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 16.313631ms)
Jan 30 14:22:06.615: INFO: (8) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 19.472313ms)
Jan 30 14:22:06.625: INFO: (9) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 10.484913ms)
Jan 30 14:22:06.626: INFO: (9) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 10.714467ms)
Jan 30 14:22:06.627: INFO: (9) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 12.565935ms)
Jan 30 14:22:06.628: INFO: (9) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 13.241333ms)
Jan 30 14:22:06.628: INFO: (9) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 13.376827ms)
Jan 30 14:22:06.628: INFO: (9) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 13.232733ms)
Jan 30 14:22:06.628: INFO: (9) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 13.269074ms)
Jan 30 14:22:06.628: INFO: (9) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test (200; 12.005042ms)
Jan 30 14:22:06.647: INFO: (10) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 11.864491ms)
Jan 30 14:22:06.648: INFO: (10) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 12.947056ms)
Jan 30 14:22:06.648: INFO: (10) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 14.226473ms)
Jan 30 14:22:06.649: INFO: (10) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 14.106321ms)
Jan 30 14:22:06.649: INFO: (10) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 14.122127ms)
Jan 30 14:22:06.653: INFO: (10) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 17.224624ms)
Jan 30 14:22:06.654: INFO: (10) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 18.889132ms)
Jan 30 14:22:06.654: INFO: (10) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 18.769608ms)
Jan 30 14:22:06.655: INFO: (10) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 20.197098ms)
Jan 30 14:22:06.658: INFO: (10) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 22.736547ms)
Jan 30 14:22:06.659: INFO: (10) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 24.42854ms)
Jan 30 14:22:06.678: INFO: (11) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 18.986135ms)
Jan 30 14:22:06.678: INFO: (11) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 19.208014ms)
Jan 30 14:22:06.678: INFO: (11) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 18.941289ms)
Jan 30 14:22:06.678: INFO: (11) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 19.252001ms)
Jan 30 14:22:06.678: INFO: (11) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 19.041456ms)
Jan 30 14:22:06.679: INFO: (11) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 20.171619ms)
Jan 30 14:22:06.680: INFO: (11) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 20.944251ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 21.544758ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 21.739653ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 21.853875ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 21.718967ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 21.759842ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 21.738945ms)
Jan 30 14:22:06.681: INFO: (11) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test (200; 6.592334ms)
Jan 30 14:22:06.690: INFO: (12) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 6.870945ms)
Jan 30 14:22:06.690: INFO: (12) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 8.705723ms)
Jan 30 14:22:06.692: INFO: (12) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 9.101403ms)
Jan 30 14:22:06.696: INFO: (12) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 13.090309ms)
Jan 30 14:22:06.696: INFO: (12) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 13.101811ms)
Jan 30 14:22:06.697: INFO: (12) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 13.567721ms)
Jan 30 14:22:06.697: INFO: (12) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 13.830079ms)
Jan 30 14:22:06.698: INFO: (12) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 14.930289ms)
Jan 30 14:22:06.703: INFO: (12) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 19.433891ms)
Jan 30 14:22:06.713: INFO: (13) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 9.973231ms)
Jan 30 14:22:06.713: INFO: (13) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 10.405698ms)
Jan 30 14:22:06.713: INFO: (13) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 10.388661ms)
Jan 30 14:22:06.713: INFO: (13) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 10.764856ms)
Jan 30 14:22:06.714: INFO: (13) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 11.200843ms)
Jan 30 14:22:06.714: INFO: (13) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 11.653919ms)
Jan 30 14:22:06.715: INFO: (13) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 12.020948ms)
Jan 30 14:22:06.715: INFO: (13) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 12.51381ms)
Jan 30 14:22:06.715: INFO: (13) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 12.405426ms)
Jan 30 14:22:06.715: INFO: (13) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 12.507453ms)
Jan 30 14:22:06.715: INFO: (13) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 12.503725ms)
Jan 30 14:22:06.715: INFO: (13) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 12.72083ms)
Jan 30 14:22:06.716: INFO: (13) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 13.634936ms)
Jan 30 14:22:06.717: INFO: (13) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 13.908954ms)
Jan 30 14:22:06.724: INFO: (14) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 6.965626ms)
Jan 30 14:22:06.724: INFO: (14) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 6.990495ms)
Jan 30 14:22:06.724: INFO: (14) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 7.133578ms)
Jan 30 14:22:06.724: INFO: (14) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 7.221294ms)
Jan 30 14:22:06.724: INFO: (14) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 7.592369ms)
Jan 30 14:22:06.725: INFO: (14) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 7.851114ms)
Jan 30 14:22:06.725: INFO: (14) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 5.479515ms)
Jan 30 14:22:06.817: INFO: (15) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 86.463831ms)
Jan 30 14:22:06.817: INFO: (15) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 86.44284ms)
Jan 30 14:22:06.818: INFO: (15) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 86.741214ms)
Jan 30 14:22:06.818: INFO: (15) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 87.240918ms)
Jan 30 14:22:06.818: INFO: (15) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 87.455789ms)
Jan 30 14:22:06.819: INFO: (15) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 87.767906ms)
Jan 30 14:22:06.819: INFO: (15) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 11.191283ms)
Jan 30 14:22:06.840: INFO: (16) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 13.020183ms)
Jan 30 14:22:06.840: INFO: (16) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 13.350745ms)
Jan 30 14:22:06.841: INFO: (16) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 13.190287ms)
Jan 30 14:22:06.841: INFO: (16) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 13.425951ms)
Jan 30 14:22:06.841: INFO: (16) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 14.082325ms)
Jan 30 14:22:06.841: INFO: (16) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 14.10824ms)
Jan 30 14:22:06.841: INFO: (16) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 14.002558ms)
Jan 30 14:22:06.841: INFO: (16) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 11.861945ms)
Jan 30 14:22:06.858: INFO: (17) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 12.246704ms)
Jan 30 14:22:06.858: INFO: (17) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 12.436357ms)
Jan 30 14:22:06.858: INFO: (17) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 12.557199ms)
Jan 30 14:22:06.859: INFO: (17) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 12.820346ms)
Jan 30 14:22:06.859: INFO: (17) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 13.041904ms)
Jan 30 14:22:06.859: INFO: (17) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 13.358276ms)
Jan 30 14:22:06.860: INFO: (17) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 13.927213ms)
Jan 30 14:22:06.860: INFO: (17) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 13.737812ms)
Jan 30 14:22:06.860: INFO: (17) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 13.654543ms)
Jan 30 14:22:06.860: INFO: (17) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 14.23387ms)
Jan 30 14:22:06.861: INFO: (17) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 15.030524ms)
Jan 30 14:22:06.862: INFO: (17) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 15.698308ms)
Jan 30 14:22:06.873: INFO: (18) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: ... (200; 12.71408ms)
Jan 30 14:22:06.875: INFO: (18) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 12.844883ms)
Jan 30 14:22:06.876: INFO: (18) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:1080/proxy/: test<... (200; 14.311953ms)
Jan 30 14:22:06.876: INFO: (18) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 14.688991ms)
Jan 30 14:22:06.876: INFO: (18) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 14.781317ms)
Jan 30 14:22:06.877: INFO: (18) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 14.739544ms)
Jan 30 14:22:06.878: INFO: (18) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:162/proxy/: bar (200; 16.030478ms)
Jan 30 14:22:06.879: INFO: (18) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 17.623096ms)
Jan 30 14:22:06.880: INFO: (18) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 18.552914ms)
Jan 30 14:22:06.883: INFO: (18) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 20.771304ms)
Jan 30 14:22:06.883: INFO: (18) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 20.880655ms)
Jan 30 14:22:06.883: INFO: (18) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 20.794876ms)
Jan 30 14:22:06.883: INFO: (18) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 20.964676ms)
Jan 30 14:22:06.889: INFO: (19) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:160/proxy/: foo (200; 6.729535ms)
Jan 30 14:22:06.890: INFO: (19) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:160/proxy/: foo (200; 6.741809ms)
Jan 30 14:22:06.890: INFO: (19) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:460/proxy/: tls baz (200; 7.252205ms)
Jan 30 14:22:06.891: INFO: (19) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr:162/proxy/: bar (200; 8.630495ms)
Jan 30 14:22:06.891: INFO: (19) /api/v1/namespaces/proxy-8084/pods/proxy-service-czlvz-svlfr/proxy/: test (200; 8.604742ms)
Jan 30 14:22:06.891: INFO: (19) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:462/proxy/: tls qux (200; 8.729722ms)
Jan 30 14:22:06.892: INFO: (19) /api/v1/namespaces/proxy-8084/pods/http:proxy-service-czlvz-svlfr:1080/proxy/: ... (200; 8.7298ms)
Jan 30 14:22:06.892: INFO: (19) /api/v1/namespaces/proxy-8084/pods/https:proxy-service-czlvz-svlfr:443/proxy/: test<... (200; 9.504679ms)
Jan 30 14:22:06.896: INFO: (19) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname2/proxy/: bar (200; 12.918274ms)
Jan 30 14:22:06.896: INFO: (19) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname1/proxy/: foo (200; 13.404841ms)
Jan 30 14:22:06.899: INFO: (19) /api/v1/namespaces/proxy-8084/services/http:proxy-service-czlvz:portname2/proxy/: bar (200; 16.127926ms)
Jan 30 14:22:06.899: INFO: (19) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/: tls baz (200; 16.246487ms)
Jan 30 14:22:06.899: INFO: (19) /api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname2/proxy/: tls qux (200; 16.352821ms)
Jan 30 14:22:06.899: INFO: (19) /api/v1/namespaces/proxy-8084/services/proxy-service-czlvz:portname1/proxy/: foo (200; 16.364416ms)
STEP: deleting ReplicationController proxy-service-czlvz in namespace proxy-8084, will wait for the garbage collector to delete the pods
Jan 30 14:22:06.963: INFO: Deleting ReplicationController proxy-service-czlvz took: 10.986299ms
Jan 30 14:22:07.265: INFO: Terminating ReplicationController proxy-service-czlvz pods took: 301.3832ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:22:16.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8084" for this suite.
Jan 30 14:22:22.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:22:22.746: INFO: namespace proxy-8084 deletion completed in 6.138873261s

• [SLOW TEST:33.686 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
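The proxy endpoints exercised above address pods and services by name, an optional scheme prefix (`http:`/`https:`), and a named or numeric port, e.g. `/api/v1/namespaces/proxy-8084/services/https:proxy-service-czlvz:tlsportname1/proxy/`. A minimal sketch of the kind of Service fixture this implies — the service name, namespace, and port names are taken from the log above, while the selector label and port numbers other than the targetPorts seen in the URLs are hypothetical placeholders, not the framework's actual spec:

```yaml
# Sketch of a Service with named ports, reachable through the apiserver proxy at
# /api/v1/namespaces/<ns>/services/[scheme:]<name>:<portname>/proxy/
# (assumptions: selector label and port numbers are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: proxy-service-czlvz
  namespace: proxy-8084
spec:
  selector:
    app: proxy-service        # hypothetical selector label
  ports:
  - name: portname1           # proxied as services/proxy-service-czlvz:portname1/proxy/
    port: 80
    targetPort: 160           # the pod port the plain-HTTP "foo" responses come from
  - name: tlsportname1        # proxied as services/https:proxy-service-czlvz:tlsportname1/proxy/
    port: 443
    targetPort: 460           # the pod port serving the "tls baz" responses
```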
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:22:22.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 30 14:22:31.451: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1848 pod-service-account-d4f294dd-f1bb-4094-976f-22aa19b242ca -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 30 14:22:32.003: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1848 pod-service-account-d4f294dd-f1bb-4094-976f-22aa19b242ca -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 30 14:22:32.627: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1848 pod-service-account-d4f294dd-f1bb-4094-976f-22aa19b242ca -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:22:33.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1848" for this suite.
Jan 30 14:22:39.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:22:39.219: INFO: namespace svcaccounts-1848 deletion completed in 6.146613095s

• [SLOW TEST:16.472 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
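The three `kubectl exec ... cat` commands above read the token, CA certificate, and namespace files that the kubelet projects into any pod that mounts its service account credentials. A hedged sketch of such a pod — the image, pod name, and command are illustrative, not the test's actual spec:

```yaml
# Minimal pod that receives an auto-mounted service account token
# (assumptions: name, image, and command are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
spec:
  serviceAccountName: default
  automountServiceAccountToken: true   # the default; shown for clarity
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
# The kubelet mounts the credentials at
# /var/run/secrets/kubernetes.io/serviceaccount/{token,ca.crt,namespace},
# which is exactly what the exec'd cat commands above read back.
```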
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:22:39.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-4bfc22de-11be-4df8-b657-9bb5c9e0d945 in namespace container-probe-3921
Jan 30 14:22:49.352: INFO: Started pod busybox-4bfc22de-11be-4df8-b657-9bb5c9e0d945 in namespace container-probe-3921
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 14:22:49.358: INFO: Initial restart count of pod busybox-4bfc22de-11be-4df8-b657-9bb5c9e0d945 is 0
Jan 30 14:23:40.200: INFO: Restart count of pod container-probe-3921/busybox-4bfc22de-11be-4df8-b657-9bb5c9e0d945 is now 1 (50.841930209s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:23:40.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3921" for this suite.
Jan 30 14:23:46.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:23:46.488: INFO: namespace container-probe-3921 deletion completed in 6.247263535s

• [SLOW TEST:67.269 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
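The restart observed above (restartCount 0 → 1 after about 50s) is driven by an exec liveness probe that runs `cat /tmp/health` inside the container. A minimal sketch of a pod with such a probe, along the lines of the upstream liveness-exec example — the pod name, image, and timing values are illustrative, not the test's actual spec:

```yaml
# Sketch of an exec liveness probe that is healthy for ~30s, then fails
# (assumptions: name, image, and probe timings are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    # Create /tmp/health, remove it after 30s so the probe starts failing.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe the test name refers to
      initialDelaySeconds: 5
      periodSeconds: 5
# After enough consecutive probe failures the kubelet kills and restarts the
# container — the restartCount transition the test polls for.
```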
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:23:46.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 30 14:23:46.569: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 14:23:46.582: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 14:23:46.616: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 30 14:23:46.630: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 30 14:23:46.631: INFO: 	Container weave ready: true, restart count 0
Jan 30 14:23:46.631: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 14:23:46.631: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.631: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 14:23:46.631: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 30 14:23:46.646: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container etcd ready: true, restart count 0
Jan 30 14:23:46.647: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 30 14:23:46.647: INFO: 	Container weave ready: true, restart count 0
Jan 30 14:23:46.647: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 14:23:46.647: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container coredns ready: true, restart count 0
Jan 30 14:23:46.647: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 30 14:23:46.647: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 14:23:46.647: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 30 14:23:46.647: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 30 14:23:46.647: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 30 14:23:46.647: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15eeb02d05dcf202], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:23:47.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3957" for this suite.
Jan 30 14:23:53.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:23:53.886: INFO: namespace sched-pred-3957 deletion completed in 6.19395332s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.397 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
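Editor's note: the FailedScheduling event in the test above can be reproduced by hand with a pod whose nodeSelector matches no node label. A minimal sketch; the pod name matches the test's event, but the label key and image are assumptions, not taken from the test source:

```shell
# Hypothetical pod with a nodeSelector present on 0/2 nodes; the scheduler
# should record a FailedScheduling event like the one logged above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.com/nonexistent: "true"   # assumed label; no node carries it
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # assumed image
EOF
# Inspect the scheduling failure:
kubectl describe pod restricted-pod | grep FailedScheduling
```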
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:23:53.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0130 14:24:08.840005       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 14:24:08.840: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:24:08.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3147" for this suite.
Jan 30 14:24:21.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:24:23.310: INFO: namespace gc-3147 deletion completed in 14.078743341s

• [SLOW TEST:29.425 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
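Editor's note: the garbage-collector test above hinges on dependents carrying two ownerReferences, so deleting one owner (even one waiting for its dependents) must not remove them. A sketch of what the metadata of one such pod looks like; the pod name and UIDs are placeholders:

```shell
# Each pod in the "half" the test re-parents carries two ownerReferences;
# the RC names below are from the test, the UIDs are placeholders.
kubectl get pod <pod-name> -o yaml
# metadata:
#   ownerReferences:
#   - apiVersion: v1
#     kind: ReplicationController
#     name: simpletest-rc-to-be-deleted
#     uid: <uid-1>
#   - apiVersion: v1
#     kind: ReplicationController
#     name: simpletest-rc-to-stay
#     uid: <uid-2>
```

Because `simpletest-rc-to-stay` remains a valid owner, the garbage collector leaves these pods in place after `simpletest-rc-to-be-deleted` is removed.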
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:24:23.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 30 14:24:48.002: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:24:48.019: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:24:50.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:24:50.027: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:24:52.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:24:52.036: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:24:54.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:24:54.027: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:24:56.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:24:56.030: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:24:58.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:24:58.025: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:00.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:00.101: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:02.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:02.029: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:04.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:04.030: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:06.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:06.031: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:08.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:08.031: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:10.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:10.030: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:12.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:12.033: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:14.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:14.033: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:16.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:16.029: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 14:25:18.020: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 14:25:18.025: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:25:18.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8778" for this suite.
Jan 30 14:25:42.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:25:42.228: INFO: namespace container-lifecycle-hook-8778 deletion completed in 24.157422009s

• [SLOW TEST:78.915 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
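Editor's note: a minimal pod with a PreStop exec hook, similar in shape to the `pod-with-prestop-exec-hook` above (the image, command, and hook body are assumptions; the test's actual spec differs):

```shell
# Sketch: a PreStop exec hook runs inside the container before termination.
# The long "still exists" polling in the log reflects the termination grace
# period while the hook and container shutdown complete.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: hooked
    image: busybox                       # assumed image
    command: ["sh", "-c", "sleep 3600"]  # assumed main process
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop > /tmp/prestop"]  # assumed hook
EOF
```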
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:25:42.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2595
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 30 14:25:42.456: INFO: Found 0 stateful pods, waiting for 3
Jan 30 14:25:52.746: INFO: Found 2 stateful pods, waiting for 3
Jan 30 14:26:02.471: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:26:02.471: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:26:02.471: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 14:26:12.471: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:26:12.471: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:26:12.471: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 30 14:26:12.517: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 30 14:26:22.583: INFO: Updating stateful set ss2
Jan 30 14:26:22.650: INFO: Waiting for Pod statefulset-2595/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 30 14:26:33.369: INFO: Found 2 stateful pods, waiting for 3
Jan 30 14:26:43.386: INFO: Found 2 stateful pods, waiting for 3
Jan 30 14:26:53.386: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:26:53.386: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:26:53.386: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 14:27:03.379: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:27:03.379: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 14:27:03.379: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 30 14:27:03.417: INFO: Updating stateful set ss2
Jan 30 14:27:03.431: INFO: Waiting for Pod statefulset-2595/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 14:27:13.452: INFO: Waiting for Pod statefulset-2595/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 14:27:23.714: INFO: Updating stateful set ss2
Jan 30 14:27:24.312: INFO: Waiting for StatefulSet statefulset-2595/ss2 to complete update
Jan 30 14:27:24.312: INFO: Waiting for Pod statefulset-2595/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 30 14:27:34.328: INFO: Waiting for StatefulSet statefulset-2595/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 30 14:27:44.328: INFO: Deleting all statefulset in ns statefulset-2595
Jan 30 14:27:44.332: INFO: Scaling statefulset ss2 to 0
Jan 30 14:28:24.371: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 14:28:24.378: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:28:24.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2595" for this suite.
Jan 30 14:28:34.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:28:34.812: INFO: namespace statefulset-2595 deletion completed in 10.346889278s

• [SLOW TEST:172.583 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
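Editor's note: the canary and phased behaviour above is driven by the RollingUpdate `partition` field. A sketch using the test's StatefulSet name (the partition values are illustrative):

```shell
# Only pods with ordinal >= partition are updated to the new revision.
# partition=2 updates just ss2-2 (the canary); lowering it phases the
# rollout across ss2-1 and then ss2-0, matching the revision waits above.
kubectl patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
```

This also explains the "Not applying an update when the partition is greater than the number of replicas" step: with partition > replicas, no ordinal qualifies and no pod is updated.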
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:28:34.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5386
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5386 to expose endpoints map[]
Jan 30 14:28:34.967: INFO: Get endpoints failed (7.936711ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 30 14:28:35.979: INFO: successfully validated that service multi-endpoint-test in namespace services-5386 exposes endpoints map[] (1.019924248s elapsed)
STEP: Creating pod pod1 in namespace services-5386
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5386 to expose endpoints map[pod1:[100]]
Jan 30 14:28:40.093: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.093798425s elapsed, will retry)
Jan 30 14:28:45.151: INFO: successfully validated that service multi-endpoint-test in namespace services-5386 exposes endpoints map[pod1:[100]] (9.151827101s elapsed)
STEP: Creating pod pod2 in namespace services-5386
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5386 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 30 14:28:49.358: INFO: Unexpected endpoints: found map[1b8b2c9f-438d-45ff-a474-a186fa3295e1:[100]], expected map[pod1:[100] pod2:[101]] (4.19398531s elapsed, will retry)
Jan 30 14:28:53.702: INFO: successfully validated that service multi-endpoint-test in namespace services-5386 exposes endpoints map[pod1:[100] pod2:[101]] (8.53757027s elapsed)
STEP: Deleting pod pod1 in namespace services-5386
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5386 to expose endpoints map[pod2:[101]]
Jan 30 14:28:54.756: INFO: successfully validated that service multi-endpoint-test in namespace services-5386 exposes endpoints map[pod2:[101]] (1.042920736s elapsed)
STEP: Deleting pod pod2 in namespace services-5386
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5386 to expose endpoints map[]
Jan 30 14:28:56.902: INFO: successfully validated that service multi-endpoint-test in namespace services-5386 exposes endpoints map[] (2.122421963s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:28:57.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5386" for this suite.
Jan 30 14:29:19.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:29:19.714: INFO: namespace services-5386 deletion completed in 22.206694038s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:44.901 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
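Editor's note: the endpoint maps above (`map[pod1:[100] pod2:[101]]`) come from a Service exposing two ports with different targetPorts. A sketch of such a Service; the selector, port names, and service ports are assumptions, while the targetPorts 100 and 101 match the logged endpoints:

```shell
# Hypothetical multiport Service: each backend pod serves one targetPort,
# so the endpoints map grows/shrinks per pod exactly as logged above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint     # assumed selector
  ports:
  - name: portname1
    port: 80                # assumed service port
    targetPort: 100
  - name: portname2
    port: 81                # assumed service port
    targetPort: 101
EOF
```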
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:29:19.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 30 14:29:19.828: INFO: namespace kubectl-7026
Jan 30 14:29:19.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7026'
Jan 30 14:29:22.434: INFO: stderr: ""
Jan 30 14:29:22.435: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 30 14:29:23.505: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:23.505: INFO: Found 0 / 1
Jan 30 14:29:24.448: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:24.448: INFO: Found 0 / 1
Jan 30 14:29:25.444: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:25.445: INFO: Found 0 / 1
Jan 30 14:29:26.445: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:26.445: INFO: Found 0 / 1
Jan 30 14:29:27.451: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:27.451: INFO: Found 0 / 1
Jan 30 14:29:28.445: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:28.445: INFO: Found 0 / 1
Jan 30 14:29:29.442: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:29.442: INFO: Found 0 / 1
Jan 30 14:29:30.490: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:30.490: INFO: Found 0 / 1
Jan 30 14:29:31.446: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:31.447: INFO: Found 1 / 1
Jan 30 14:29:31.447: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 30 14:29:31.453: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:29:31.453: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 30 14:29:31.453: INFO: wait on redis-master startup in kubectl-7026 
Jan 30 14:29:31.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4gtbv redis-master --namespace=kubectl-7026'
Jan 30 14:29:31.654: INFO: stderr: ""
Jan 30 14:29:31.654: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Jan 14:29:29.358 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Jan 14:29:29.359 # Server started, Redis version 3.2.12\n1:M 30 Jan 14:29:29.359 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Jan 14:29:29.359 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 30 14:29:31.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7026'
Jan 30 14:29:31.910: INFO: stderr: ""
Jan 30 14:29:31.910: INFO: stdout: "service/rm2 exposed\n"
Jan 30 14:29:31.949: INFO: Service rm2 in namespace kubectl-7026 found.
STEP: exposing service
Jan 30 14:29:33.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7026'
Jan 30 14:29:34.307: INFO: stderr: ""
Jan 30 14:29:34.307: INFO: stdout: "service/rm3 exposed\n"
Jan 30 14:29:34.350: INFO: Service rm3 in namespace kubectl-7026 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:29:36.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7026" for this suite.
Jan 30 14:29:58.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:29:58.725: INFO: namespace kubectl-7026 deletion completed in 22.350222982s

• [SLOW TEST:39.010 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
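Editor's note: the expose chain above can be reproduced directly; the names and ports are taken from the logged commands:

```shell
# Expose the RC as service rm2, then expose rm2 as rm3. Both services end up
# selecting the same redis pods; only the service port differs (1234 vs 2345),
# while both forward to containerPort 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
```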
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:29:58.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 30 14:29:58.818: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix440173645/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:29:58.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7533" for this suite.
Jan 30 14:30:04.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:30:05.069: INFO: namespace kubectl-7533 deletion completed in 6.15666513s

• [SLOW TEST:6.343 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
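Editor's note: the proxy test above boils down to two commands: serve the API over a Unix domain socket, then query `/api/` through it. The socket path here is an assumption (the test generates a temporary one):

```shell
# Start the proxy on a Unix socket instead of a TCP port, then hit the
# API root through it with curl's --unix-socket option.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
```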
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:30:05.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 14:30:05.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7947'
Jan 30 14:30:05.327: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 14:30:05.327: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 30 14:30:05.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7947'
Jan 30 14:30:05.600: INFO: stderr: ""
Jan 30 14:30:05.600: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:30:05.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7947" for this suite.
Jan 30 14:30:11.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:30:11.778: INFO: namespace kubectl-7947 deletion completed in 6.172400808s

• [SLOW TEST:6.707 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
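Editor's note: the stderr above warns that `kubectl run --generator=deployment/apps.v1` is deprecated. The non-deprecated equivalents suggested by the warning, in kubectl 1.15-era syntax (names and image taken from the logged command):

```shell
# Create the Deployment explicitly instead of via kubectl run:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine

# Or, for a bare pod, use the run-pod/v1 generator the warning recommends:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
```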
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:30:11.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:30:12.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3262" for this suite.
Jan 30 14:30:18.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:30:18.283: INFO: namespace kubelet-test-3262 deletion completed in 6.201671852s

• [SLOW TEST:6.504 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:30:18.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:30:18.363: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:30:19.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6041" for this suite.
Jan 30 14:30:25.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:30:25.706: INFO: namespace custom-resource-definition-6041 deletion completed in 6.169691551s

• [SLOW TEST:7.422 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:30:25.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:30:25.811: INFO: Creating ReplicaSet my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046
Jan 30 14:30:25.839: INFO: Pod name my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046: Found 0 pods out of 1
Jan 30 14:30:30.852: INFO: Pod name my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046: Found 1 pods out of 1
Jan 30 14:30:30.852: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046" is running
Jan 30 14:30:34.890: INFO: Pod "my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046-h8lrh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 14:30:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 14:30:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 14:30:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 14:30:25 +0000 UTC Reason: Message:}])
Jan 30 14:30:34.891: INFO: Trying to dial the pod
Jan 30 14:30:39.941: INFO: Controller my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046: Got expected result from replica 1 [my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046-h8lrh]: "my-hostname-basic-b2ab17e6-e115-4723-8a35-7024cb2c5046-h8lrh", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:30:39.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6725" for this suite.
Jan 30 14:30:45.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:30:46.106: INFO: namespace replicaset-6725 deletion completed in 6.156606349s

• [SLOW TEST:20.398 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:30:46.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:30:46.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9360" for this suite.
Jan 30 14:30:52.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:30:52.285: INFO: namespace services-9360 deletion completed in 6.097199367s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.178 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:30:52.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan 30 14:30:52.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-245'
Jan 30 14:30:52.806: INFO: stderr: ""
Jan 30 14:30:52.806: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan 30 14:30:53.817: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:53.817: INFO: Found 0 / 1
Jan 30 14:30:54.824: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:54.824: INFO: Found 0 / 1
Jan 30 14:30:55.833: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:55.834: INFO: Found 0 / 1
Jan 30 14:30:56.824: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:56.824: INFO: Found 0 / 1
Jan 30 14:30:57.822: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:57.823: INFO: Found 0 / 1
Jan 30 14:30:58.814: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:58.814: INFO: Found 0 / 1
Jan 30 14:30:59.820: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:30:59.820: INFO: Found 0 / 1
Jan 30 14:31:00.815: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:31:00.816: INFO: Found 1 / 1
Jan 30 14:31:00.816: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 30 14:31:00.823: INFO: Selector matched 1 pods for map[app:redis]
Jan 30 14:31:00.823: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 30 14:31:00.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8z5xq redis-master --namespace=kubectl-245'
Jan 30 14:31:01.077: INFO: stderr: ""
Jan 30 14:31:01.077: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Jan 14:30:59.836 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Jan 14:30:59.837 # Server started, Redis version 3.2.12\n1:M 30 Jan 14:30:59.837 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Jan 14:30:59.837 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 30 14:31:01.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8z5xq redis-master --namespace=kubectl-245 --tail=1'
Jan 30 14:31:01.213: INFO: stderr: ""
Jan 30 14:31:01.214: INFO: stdout: "1:M 30 Jan 14:30:59.837 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 30 14:31:01.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8z5xq redis-master --namespace=kubectl-245 --limit-bytes=1'
Jan 30 14:31:01.333: INFO: stderr: ""
Jan 30 14:31:01.333: INFO: stdout: " "
STEP: exposing timestamps
Jan 30 14:31:01.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8z5xq redis-master --namespace=kubectl-245 --tail=1 --timestamps'
Jan 30 14:31:01.513: INFO: stderr: ""
Jan 30 14:31:01.513: INFO: stdout: "2020-01-30T14:30:59.838060349Z 1:M 30 Jan 14:30:59.837 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 30 14:31:04.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8z5xq redis-master --namespace=kubectl-245 --since=1s'
Jan 30 14:31:04.211: INFO: stderr: ""
Jan 30 14:31:04.212: INFO: stdout: ""
Jan 30 14:31:04.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8z5xq redis-master --namespace=kubectl-245 --since=24h'
Jan 30 14:31:04.409: INFO: stderr: ""
Jan 30 14:31:04.409: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 30 Jan 14:30:59.836 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 30 Jan 14:30:59.837 # Server started, Redis version 3.2.12\n1:M 30 Jan 14:30:59.837 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 30 Jan 14:30:59.837 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 30 14:31:04.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-245'
Jan 30 14:31:04.684: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 14:31:04.685: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 30 14:31:04.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-245'
Jan 30 14:31:04.966: INFO: stderr: "No resources found.\n"
Jan 30 14:31:04.966: INFO: stdout: ""
Jan 30 14:31:04.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-245 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 14:31:05.142: INFO: stderr: ""
Jan 30 14:31:05.143: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:31:05.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-245" for this suite.
Jan 30 14:31:27.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:31:27.294: INFO: namespace kubectl-245 deletion completed in 22.142393116s

• [SLOW TEST:35.009 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:31:27.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-vrqb
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 14:31:27.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vrqb" in namespace "subpath-86" to be "success or failure"
Jan 30 14:31:27.476: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.742425ms
Jan 30 14:31:29.505: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03869737s
Jan 30 14:31:31.517: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050462578s
Jan 30 14:31:33.525: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058615807s
Jan 30 14:31:35.533: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0664741s
Jan 30 14:31:37.546: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 10.078953281s
Jan 30 14:31:39.561: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 12.094052747s
Jan 30 14:31:41.570: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 14.103319612s
Jan 30 14:31:43.584: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 16.117212407s
Jan 30 14:31:45.593: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 18.126581698s
Jan 30 14:31:47.605: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 20.138550517s
Jan 30 14:31:49.622: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 22.154825203s
Jan 30 14:31:51.631: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 24.164441943s
Jan 30 14:31:53.651: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 26.184312108s
Jan 30 14:31:55.663: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Running", Reason="", readiness=true. Elapsed: 28.196746938s
Jan 30 14:31:57.673: INFO: Pod "pod-subpath-test-projected-vrqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.206333833s
STEP: Saw pod success
Jan 30 14:31:57.673: INFO: Pod "pod-subpath-test-projected-vrqb" satisfied condition "success or failure"
Jan 30 14:31:57.678: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-vrqb container test-container-subpath-projected-vrqb: 
STEP: delete the pod
Jan 30 14:31:57.731: INFO: Waiting for pod pod-subpath-test-projected-vrqb to disappear
Jan 30 14:31:57.860: INFO: Pod pod-subpath-test-projected-vrqb no longer exists
STEP: Deleting pod pod-subpath-test-projected-vrqb
Jan 30 14:31:57.860: INFO: Deleting pod "pod-subpath-test-projected-vrqb" in namespace "subpath-86"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:31:57.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-86" for this suite.
Jan 30 14:32:03.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:32:04.030: INFO: namespace subpath-86 deletion completed in 6.141532088s

• [SLOW TEST:36.735 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:32:04.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d6f2af83-fa58-4e85-8f24-a8d84fcc662a
STEP: Creating a pod to test consume configMaps
Jan 30 14:32:04.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42" in namespace "configmap-2605" to be "success or failure"
Jan 30 14:32:04.290: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127003ms
Jan 30 14:32:06.304: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021805664s
Jan 30 14:32:08.329: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047437421s
Jan 30 14:32:10.345: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063560078s
Jan 30 14:32:12.388: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105782307s
Jan 30 14:32:14.405: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123496832s
STEP: Saw pod success
Jan 30 14:32:14.405: INFO: Pod "pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42" satisfied condition "success or failure"
Jan 30 14:32:14.412: INFO: Trying to get logs from node iruya-node pod pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42 container configmap-volume-test: 
STEP: delete the pod
Jan 30 14:32:14.490: INFO: Waiting for pod pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42 to disappear
Jan 30 14:32:14.551: INFO: Pod pod-configmaps-25850d96-6c36-41ba-842b-45dc18533d42 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:32:14.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2605" for this suite.
Jan 30 14:32:20.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:32:20.733: INFO: namespace configmap-2605 deletion completed in 6.15396406s

• [SLOW TEST:16.702 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:32:20.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:32:26.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5277" for this suite.
Jan 30 14:32:32.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:32:32.494: INFO: namespace watch-5277 deletion completed in 6.182759903s

• [SLOW TEST:11.761 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:32:32.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 30 14:32:41.213: INFO: Successfully updated pod "labelsupdate4c1e1d60-0b3d-4112-8002-e5650586fd95"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:32:45.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-713" for this suite.
Jan 30 14:33:07.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:33:07.535: INFO: namespace projected-713 deletion completed in 22.176599476s

• [SLOW TEST:35.038 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:33:07.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0130 14:33:10.938676       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 14:33:10.938: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:33:10.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6868" for this suite.
Jan 30 14:33:16.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:33:17.073: INFO: namespace gc-6868 deletion completed in 6.129994687s

• [SLOW TEST:9.538 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:33:17.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6bbc7c72-0d56-46f9-8e51-ed50c25ac414
STEP: Creating a pod to test consume configMaps
Jan 30 14:33:17.253: INFO: Waiting up to 5m0s for pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f" in namespace "configmap-1030" to be "success or failure"
Jan 30 14:33:17.275: INFO: Pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.513961ms
Jan 30 14:33:19.288: INFO: Pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034364954s
Jan 30 14:33:21.298: INFO: Pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044511893s
Jan 30 14:33:23.310: INFO: Pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057000715s
Jan 30 14:33:25.323: INFO: Pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06984706s
STEP: Saw pod success
Jan 30 14:33:25.323: INFO: Pod "pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f" satisfied condition "success or failure"
Jan 30 14:33:25.328: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f container configmap-volume-test: 
STEP: delete the pod
Jan 30 14:33:25.396: INFO: Waiting for pod pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f to disappear
Jan 30 14:33:25.402: INFO: Pod pod-configmaps-3070c1d9-1537-4d8d-9524-c46789ad7b5f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:33:25.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1030" for this suite.
Jan 30 14:33:31.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:33:31.708: INFO: namespace configmap-1030 deletion completed in 6.296416319s

• [SLOW TEST:14.634 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:33:31.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 14:33:31.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5749'
Jan 30 14:33:31.999: INFO: stderr: ""
Jan 30 14:33:32.000: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 30 14:33:32.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5749'
Jan 30 14:33:36.605: INFO: stderr: ""
Jan 30 14:33:36.605: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:33:36.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5749" for this suite.
Jan 30 14:33:42.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:33:42.742: INFO: namespace kubectl-5749 deletion completed in 6.127205082s

• [SLOW TEST:11.033 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:33:42.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3cb624a2-f2d9-4d19-a46e-8946022c2ffe
STEP: Creating a pod to test consume configMaps
Jan 30 14:33:42.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42" in namespace "configmap-8589" to be "success or failure"
Jan 30 14:33:42.857: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42": Phase="Pending", Reason="", readiness=false. Elapsed: 17.912477ms
Jan 30 14:33:44.873: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03448839s
Jan 30 14:33:47.717: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.878449945s
Jan 30 14:33:49.726: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886573681s
Jan 30 14:33:51.735: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.89586069s
Jan 30 14:33:53.746: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907273754s
STEP: Saw pod success
Jan 30 14:33:53.746: INFO: Pod "pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42" satisfied condition "success or failure"
Jan 30 14:33:53.752: INFO: Trying to get logs from node iruya-node pod pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42 container configmap-volume-test: 
STEP: delete the pod
Jan 30 14:33:53.837: INFO: Waiting for pod pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42 to disappear
Jan 30 14:33:53.962: INFO: Pod pod-configmaps-761f79eb-acb6-4a5b-a783-be056a77fb42 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:33:53.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8589" for this suite.
Jan 30 14:34:00.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:34:00.153: INFO: namespace configmap-8589 deletion completed in 6.180135316s

• [SLOW TEST:17.410 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:34:00.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 30 14:34:00.264: INFO: Waiting up to 5m0s for pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057" in namespace "containers-2695" to be "success or failure"
Jan 30 14:34:00.279: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057": Phase="Pending", Reason="", readiness=false. Elapsed: 14.45429ms
Jan 30 14:34:02.288: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023487024s
Jan 30 14:34:04.296: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031987223s
Jan 30 14:34:06.305: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040748455s
Jan 30 14:34:08.320: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055394542s
Jan 30 14:34:10.328: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06376613s
STEP: Saw pod success
Jan 30 14:34:10.328: INFO: Pod "client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057" satisfied condition "success or failure"
Jan 30 14:34:10.331: INFO: Trying to get logs from node iruya-node pod client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057 container test-container: 
STEP: delete the pod
Jan 30 14:34:10.442: INFO: Waiting for pod client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057 to disappear
Jan 30 14:34:10.455: INFO: Pod client-containers-56b9e925-fdfe-40bc-9d01-9bbdc93a6057 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:34:10.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2695" for this suite.
Jan 30 14:34:16.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:34:16.636: INFO: namespace containers-2695 deletion completed in 6.170232098s

• [SLOW TEST:16.483 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:34:16.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:34:26.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7860" for this suite.
Jan 30 14:35:09.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:35:09.149: INFO: namespace kubelet-test-7860 deletion completed in 42.160090797s

• [SLOW TEST:52.511 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:35:09.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 30 14:35:09.206: INFO: Waiting up to 5m0s for pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206" in namespace "emptydir-610" to be "success or failure"
Jan 30 14:35:09.240: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Pending", Reason="", readiness=false. Elapsed: 33.609666ms
Jan 30 14:35:11.248: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041658856s
Jan 30 14:35:13.258: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051753489s
Jan 30 14:35:15.268: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061667836s
Jan 30 14:35:17.277: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069979174s
Jan 30 14:35:19.290: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Running", Reason="", readiness=true. Elapsed: 10.082992367s
Jan 30 14:35:21.300: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.093710928s
STEP: Saw pod success
Jan 30 14:35:21.300: INFO: Pod "pod-e1908e0b-0e64-497d-aced-12b7d3861206" satisfied condition "success or failure"
Jan 30 14:35:21.306: INFO: Trying to get logs from node iruya-node pod pod-e1908e0b-0e64-497d-aced-12b7d3861206 container test-container: 
STEP: delete the pod
Jan 30 14:35:21.663: INFO: Waiting for pod pod-e1908e0b-0e64-497d-aced-12b7d3861206 to disappear
Jan 30 14:35:21.673: INFO: Pod pod-e1908e0b-0e64-497d-aced-12b7d3861206 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:35:21.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-610" for this suite.
Jan 30 14:35:27.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:35:27.968: INFO: namespace emptydir-610 deletion completed in 6.286600996s

• [SLOW TEST:18.819 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:35:27.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 30 14:35:28.150: INFO: Waiting up to 5m0s for pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130" in namespace "var-expansion-6452" to be "success or failure"
Jan 30 14:35:28.170: INFO: Pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130": Phase="Pending", Reason="", readiness=false. Elapsed: 20.607394ms
Jan 30 14:35:30.182: INFO: Pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032382249s
Jan 30 14:35:32.192: INFO: Pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042473109s
Jan 30 14:35:34.201: INFO: Pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050761146s
Jan 30 14:35:36.217: INFO: Pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066658503s
STEP: Saw pod success
Jan 30 14:35:36.217: INFO: Pod "var-expansion-a7d2fbd4-7706-432d-863e-930b62783130" satisfied condition "success or failure"
Jan 30 14:35:36.230: INFO: Trying to get logs from node iruya-node pod var-expansion-a7d2fbd4-7706-432d-863e-930b62783130 container dapi-container: 
STEP: delete the pod
Jan 30 14:35:36.329: INFO: Waiting for pod var-expansion-a7d2fbd4-7706-432d-863e-930b62783130 to disappear
Jan 30 14:35:36.335: INFO: Pod var-expansion-a7d2fbd4-7706-432d-863e-930b62783130 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:35:36.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6452" for this suite.
Jan 30 14:35:42.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:35:42.501: INFO: namespace var-expansion-6452 deletion completed in 6.158685989s

• [SLOW TEST:14.533 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:35:42.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-4275
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4275 to expose endpoints map[]
Jan 30 14:35:42.796: INFO: successfully validated that service endpoint-test2 in namespace services-4275 exposes endpoints map[] (36.371503ms elapsed)
STEP: Creating pod pod1 in namespace services-4275
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4275 to expose endpoints map[pod1:[80]]
Jan 30 14:35:46.924: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.118577773s elapsed, will retry)
Jan 30 14:35:52.139: INFO: successfully validated that service endpoint-test2 in namespace services-4275 exposes endpoints map[pod1:[80]] (9.333639684s elapsed)
STEP: Creating pod pod2 in namespace services-4275
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4275 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 30 14:35:56.771: INFO: Unexpected endpoints: found map[06c449d7-5809-47ed-9522-3f719981dcf9:[80]], expected map[pod1:[80] pod2:[80]] (4.619341202s elapsed, will retry)
Jan 30 14:35:59.820: INFO: successfully validated that service endpoint-test2 in namespace services-4275 exposes endpoints map[pod1:[80] pod2:[80]] (7.66762365s elapsed)
STEP: Deleting pod pod1 in namespace services-4275
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4275 to expose endpoints map[pod2:[80]]
Jan 30 14:35:59.910: INFO: successfully validated that service endpoint-test2 in namespace services-4275 exposes endpoints map[pod2:[80]] (61.847522ms elapsed)
STEP: Deleting pod pod2 in namespace services-4275
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4275 to expose endpoints map[]
Jan 30 14:35:59.949: INFO: successfully validated that service endpoint-test2 in namespace services-4275 exposes endpoints map[] (30.093074ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:36:00.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4275" for this suite.
Jan 30 14:36:22.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:36:22.216: INFO: namespace services-4275 deletion completed in 22.138806845s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.714 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:36:22.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 30 14:36:22.307: INFO: Waiting up to 5m0s for pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19" in namespace "emptydir-5410" to be "success or failure"
Jan 30 14:36:22.311: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33449ms
Jan 30 14:36:24.319: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012411417s
Jan 30 14:36:26.330: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023185124s
Jan 30 14:36:28.338: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031405275s
Jan 30 14:36:30.347: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0404153s
Jan 30 14:36:32.356: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049215525s
STEP: Saw pod success
Jan 30 14:36:32.356: INFO: Pod "pod-a4466d61-17a0-4d47-a72a-8435774fac19" satisfied condition "success or failure"
Jan 30 14:36:32.361: INFO: Trying to get logs from node iruya-node pod pod-a4466d61-17a0-4d47-a72a-8435774fac19 container test-container: 
STEP: delete the pod
Jan 30 14:36:32.522: INFO: Waiting for pod pod-a4466d61-17a0-4d47-a72a-8435774fac19 to disappear
Jan 30 14:36:32.634: INFO: Pod pod-a4466d61-17a0-4d47-a72a-8435774fac19 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:36:32.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5410" for this suite.
Jan 30 14:36:38.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:36:38.812: INFO: namespace emptydir-5410 deletion completed in 6.16059587s

• [SLOW TEST:16.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:36:38.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6e51cdb5-ed83-4d0d-b635-dba0f4944b91
STEP: Creating a pod to test consume configMaps
Jan 30 14:36:38.948: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11" in namespace "projected-8531" to be "success or failure"
Jan 30 14:36:38.964: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11": Phase="Pending", Reason="", readiness=false. Elapsed: 16.361342ms
Jan 30 14:36:40.976: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02786056s
Jan 30 14:36:43.005: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05702135s
Jan 30 14:36:45.013: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064506375s
Jan 30 14:36:47.042: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093727482s
Jan 30 14:36:49.058: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109692118s
STEP: Saw pod success
Jan 30 14:36:49.058: INFO: Pod "pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11" satisfied condition "success or failure"
Jan 30 14:36:49.062: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 14:36:49.154: INFO: Waiting for pod pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11 to disappear
Jan 30 14:36:49.249: INFO: Pod pod-projected-configmaps-cd8bd686-60c5-4f8d-b54d-a23438b81f11 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:36:49.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8531" for this suite.
Jan 30 14:36:55.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:36:55.430: INFO: namespace projected-8531 deletion completed in 6.175141123s

• [SLOW TEST:16.618 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
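For orientation, the "Projected configMap … defaultMode" test above generates a pod spec along these lines. This is an illustrative sketch only, not the manifest from this run; the object names, image, key, and mode value are placeholders.

```yaml
# Sketch of a pod consuming a projected configMap volume with defaultMode set.
# Names, image, and mode are placeholders, not values from this test run.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400          # file mode applied to projected files
      sources:
      - configMap:
          name: projected-configmap-test-volume-example
```

The test then reads the file from the container and checks both its content and that the mounted file carries the requested mode.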
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:36:55.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-m7d4
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 14:36:55.590: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-m7d4" in namespace "subpath-6045" to be "success or failure"
Jan 30 14:36:55.611: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 21.018884ms
Jan 30 14:36:57.663: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072850035s
Jan 30 14:36:59.672: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08129698s
Jan 30 14:37:01.680: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089398331s
Jan 30 14:37:03.697: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106891482s
Jan 30 14:37:05.709: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 10.118318129s
Jan 30 14:37:07.729: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 12.138627505s
Jan 30 14:37:09.739: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 14.148352877s
Jan 30 14:37:11.749: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 16.158762984s
Jan 30 14:37:13.765: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 18.175075149s
Jan 30 14:37:15.772: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 20.181837557s
Jan 30 14:37:17.803: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 22.212918679s
Jan 30 14:37:19.836: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 24.246180932s
Jan 30 14:37:21.846: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 26.255902806s
Jan 30 14:37:23.867: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Running", Reason="", readiness=true. Elapsed: 28.276542755s
Jan 30 14:37:25.879: INFO: Pod "pod-subpath-test-secret-m7d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.288599246s
STEP: Saw pod success
Jan 30 14:37:25.879: INFO: Pod "pod-subpath-test-secret-m7d4" satisfied condition "success or failure"
Jan 30 14:37:25.884: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-m7d4 container test-container-subpath-secret-m7d4: 
STEP: delete the pod
Jan 30 14:37:25.965: INFO: Waiting for pod pod-subpath-test-secret-m7d4 to disappear
Jan 30 14:37:25.974: INFO: Pod pod-subpath-test-secret-m7d4 no longer exists
STEP: Deleting pod pod-subpath-test-secret-m7d4
Jan 30 14:37:25.974: INFO: Deleting pod "pod-subpath-test-secret-m7d4" in namespace "subpath-6045"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:37:25.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6045" for this suite.
Jan 30 14:37:32.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:37:32.165: INFO: namespace subpath-6045 deletion completed in 6.18349784s

• [SLOW TEST:36.735 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
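The "subpaths with secret pod" test above exercises a secret volume mounted via `subPath`; a pod spec in roughly this shape reproduces the scenario. This is a hedged sketch, not the generated manifest; the names, image, and key are placeholders.

```yaml
# Sketch of a pod mounting a single secret key via subPath, as the
# atomic-writer subpath test does. All names here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/test-file
      subPath: test-file         # mount one file out of the secret volume
  volumes:
  - name: test-volume
    secret:
      secretName: my-secret
```

The pod stays in `Running` for much longer here than in the other tests because the container repeatedly re-reads the subpath while the secret's atomic-writer updates land underneath it.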
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:37:32.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:37:32.330: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a" in namespace "projected-5421" to be "success or failure"
Jan 30 14:37:32.337: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.927767ms
Jan 30 14:37:34.347: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017296114s
Jan 30 14:37:36.372: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042674431s
Jan 30 14:37:38.384: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054621117s
Jan 30 14:37:40.394: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a": Phase="Running", Reason="", readiness=true. Elapsed: 8.064635916s
Jan 30 14:37:42.404: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073980044s
STEP: Saw pod success
Jan 30 14:37:42.404: INFO: Pod "downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a" satisfied condition "success or failure"
Jan 30 14:37:42.410: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a container client-container: 
STEP: delete the pod
Jan 30 14:37:42.493: INFO: Waiting for pod downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a to disappear
Jan 30 14:37:42.520: INFO: Pod downwardapi-volume-ab61bee7-3204-4de7-adc2-3f356ff6e88a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:37:42.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5421" for this suite.
Jan 30 14:37:48.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:37:48.817: INFO: namespace projected-5421 deletion completed in 6.157094923s

• [SLOW TEST:16.652 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
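The downward API test above checks that, when no CPU limit is set on the container, the projected `limits.cpu` value falls back to the node's allocatable CPU. A spec along these lines illustrates the setup; names and image are placeholders, not values from this run.

```yaml
# Sketch of a pod projecting its own CPU limit via the downward API.
# With no resources.limits.cpu set, the file reports node allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```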
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:37:48.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 30 14:37:48.906: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 14:37:48.916: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 14:37:48.919: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan 30 14:37:48.928: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.929: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 14:37:48.929: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 30 14:37:48.929: INFO: 	Container weave ready: true, restart count 0
Jan 30 14:37:48.929: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 14:37:48.929: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan 30 14:37:48.938: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container coredns ready: true, restart count 0
Jan 30 14:37:48.938: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container etcd ready: true, restart count 0
Jan 30 14:37:48.938: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 30 14:37:48.938: INFO: 	Container weave ready: true, restart count 0
Jan 30 14:37:48.938: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 14:37:48.938: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 30 14:37:48.938: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 14:37:48.938: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 30 14:37:48.938: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 30 14:37:48.938: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan 30 14:37:48.938: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 30 14:37:49.192: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 30 14:37:49.193: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 30 14:37:49.193: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46.15eeb0f136f24ae0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1557/filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46.15eeb0f27d964c0b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46.15eeb0f36e4b7ebd], Reason = [Created], Message = [Created container filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46.15eeb0f389c6b0bb], Reason = [Started], Message = [Started container filler-pod-166b6c3b-2b24-4959-a471-bddb19a28c46]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa.15eeb0f1373d01dd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1557/filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa.15eeb0f28035225c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa.15eeb0f364c08fcb], Reason = [Created], Message = [Created container filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa.15eeb0f3838a485b], Reason = [Started], Message = [Started container filler-pod-1f9be41a-7e16-48c5-8b43-b5d3be9b22aa]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eeb0f40c0725cf], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:38:02.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1557" for this suite.
Jan 30 14:38:10.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:38:10.839: INFO: namespace sched-pred-1557 deletion completed in 8.140296474s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.022 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
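The `FailedScheduling` event above ("0/2 nodes are available: 2 Insufficient cpu") comes from an extra pod whose CPU request exceeds what the filler pods left on either node. A minimal sketch of such a pod follows; the request value is illustrative, not the one computed by the test.

```yaml
# Sketch of the "additional" pod that cannot be scheduled once filler pods
# consume most allocatable CPU. The request value is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # more than remains free on either node
```

The test confirms the predicate by watching for exactly this warning event rather than by inspecting scheduler internals.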
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:38:10.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 14:38:10.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4926'
Jan 30 14:38:11.176: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 14:38:11.176: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 30 14:38:11.245: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-t7v68]
Jan 30 14:38:11.246: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-t7v68" in namespace "kubectl-4926" to be "running and ready"
Jan 30 14:38:11.307: INFO: Pod "e2e-test-nginx-rc-t7v68": Phase="Pending", Reason="", readiness=false. Elapsed: 61.676364ms
Jan 30 14:38:13.323: INFO: Pod "e2e-test-nginx-rc-t7v68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077215603s
Jan 30 14:38:15.330: INFO: Pod "e2e-test-nginx-rc-t7v68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084633031s
Jan 30 14:38:17.340: INFO: Pod "e2e-test-nginx-rc-t7v68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094453727s
Jan 30 14:38:19.354: INFO: Pod "e2e-test-nginx-rc-t7v68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108557846s
Jan 30 14:38:21.366: INFO: Pod "e2e-test-nginx-rc-t7v68": Phase="Running", Reason="", readiness=true. Elapsed: 10.120558813s
Jan 30 14:38:21.366: INFO: Pod "e2e-test-nginx-rc-t7v68" satisfied condition "running and ready"
Jan 30 14:38:21.366: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-t7v68]
Jan 30 14:38:21.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4926'
Jan 30 14:38:21.551: INFO: stderr: ""
Jan 30 14:38:21.551: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 30 14:38:21.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4926'
Jan 30 14:38:21.700: INFO: stderr: ""
Jan 30 14:38:21.700: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:38:21.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4926" for this suite.
Jan 30 14:38:43.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:38:43.945: INFO: namespace kubectl-4926 deletion completed in 22.223185848s

• [SLOW TEST:33.105 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
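The deprecation warning in the kubectl test above ("kubectl run --generator=run/v1 is DEPRECATED … use kubectl create instead") points at creating the ReplicationController from a manifest. A roughly equivalent manifest, as a sketch, would look like this; the label key `run` matches what the run/v1 generator emits, but treat the details as illustrative.

```yaml
# Sketch of a ReplicationController equivalent to
# `kubectl run e2e-test-nginx-rc --image=... --generator=run/v1`.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

Note the empty stdout from `kubectl logs rc/e2e-test-nginx-rc` later in the block is expected here: nginx had served no requests yet.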
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:38:43.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-470e81e1-6199-45ca-a003-8752d18be692
STEP: Creating secret with name secret-projected-all-test-volume-d3b5c0cf-e20d-4a25-b486-2e3cbc0e06a7
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 30 14:38:44.141: INFO: Waiting up to 5m0s for pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab" in namespace "projected-8739" to be "success or failure"
Jan 30 14:38:44.154: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222033ms
Jan 30 14:38:46.160: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018733819s
Jan 30 14:38:48.170: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028386627s
Jan 30 14:38:50.184: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042425226s
Jan 30 14:38:52.192: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050534152s
Jan 30 14:38:54.201: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05906602s
STEP: Saw pod success
Jan 30 14:38:54.201: INFO: Pod "projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab" satisfied condition "success or failure"
Jan 30 14:38:54.205: INFO: Trying to get logs from node iruya-node pod projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab container projected-all-volume-test: 
STEP: delete the pod
Jan 30 14:38:54.421: INFO: Waiting for pod projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab to disappear
Jan 30 14:38:54.427: INFO: Pod projected-volume-0c9d0f22-b116-4322-8f81-4f750fa202ab no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:38:54.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8739" for this suite.
Jan 30 14:39:00.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:39:00.593: INFO: namespace projected-8739 deletion completed in 6.160397165s

• [SLOW TEST:16.648 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:39:00.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 30 14:39:00.753: INFO: Waiting up to 5m0s for pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc" in namespace "emptydir-8618" to be "success or failure"
Jan 30 14:39:00.763: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.595066ms
Jan 30 14:39:02.773: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020117371s
Jan 30 14:39:04.784: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03056321s
Jan 30 14:39:06.795: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041310862s
Jan 30 14:39:08.806: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052494933s
Jan 30 14:39:10.819: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065720248s
STEP: Saw pod success
Jan 30 14:39:10.819: INFO: Pod "pod-52756cbb-978b-49a4-8def-a44e8d3362cc" satisfied condition "success or failure"
Jan 30 14:39:10.826: INFO: Trying to get logs from node iruya-node pod pod-52756cbb-978b-49a4-8def-a44e8d3362cc container test-container: 
STEP: delete the pod
Jan 30 14:39:11.140: INFO: Waiting for pod pod-52756cbb-978b-49a4-8def-a44e8d3362cc to disappear
Jan 30 14:39:11.144: INFO: Pod pod-52756cbb-978b-49a4-8def-a44e8d3362cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:39:11.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8618" for this suite.
Jan 30 14:39:17.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:39:17.291: INFO: namespace emptydir-8618 deletion completed in 6.140627786s

• [SLOW TEST:16.698 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
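The "(root,0666,tmpfs)" emptyDir variant above corresponds to a memory-backed emptyDir volume written with mode 0666. A sketch of the shape of pod involved, with placeholder names and image:

```yaml
# Sketch of an emptyDir test pod using a tmpfs-backed volume
# (medium: Memory), as in the (root,0666,tmpfs) variant. Placeholders only.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume && cat /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs rather than node disk
```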
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:39:17.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-25ba31a9-19a5-49e1-a75f-dc8d210fc667
STEP: Creating a pod to test consume secrets
Jan 30 14:39:17.438: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a" in namespace "projected-2091" to be "success or failure"
Jan 30 14:39:17.453: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.68067ms
Jan 30 14:39:19.464: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026173401s
Jan 30 14:39:21.476: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037941303s
Jan 30 14:39:23.490: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052054684s
Jan 30 14:39:25.499: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060873607s
Jan 30 14:39:27.507: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068972175s
STEP: Saw pod success
Jan 30 14:39:27.507: INFO: Pod "pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a" satisfied condition "success or failure"
Jan 30 14:39:27.511: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 14:39:27.671: INFO: Waiting for pod pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a to disappear
Jan 30 14:39:27.685: INFO: Pod pod-projected-secrets-6c8e95d9-b256-4a7e-ae8b-fe7ca03ed69a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:39:27.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2091" for this suite.
Jan 30 14:39:33.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:39:33.890: INFO: namespace projected-2091 deletion completed in 6.200300199s

• [SLOW TEST:16.598 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:39:33.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-799e9f8a-e3a7-45a5-a5c0-592dcdae549b
STEP: Creating a pod to test consume secrets
Jan 30 14:39:34.019: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca" in namespace "projected-1083" to be "success or failure"
Jan 30 14:39:34.042: INFO: Pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca": Phase="Pending", Reason="", readiness=false. Elapsed: 22.706887ms
Jan 30 14:39:36.050: INFO: Pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03040995s
Jan 30 14:39:38.057: INFO: Pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037933594s
Jan 30 14:39:40.065: INFO: Pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045401853s
Jan 30 14:39:42.075: INFO: Pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055069484s
STEP: Saw pod success
Jan 30 14:39:42.075: INFO: Pod "pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca" satisfied condition "success or failure"
Jan 30 14:39:42.080: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 14:39:42.137: INFO: Waiting for pod pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca to disappear
Jan 30 14:39:42.145: INFO: Pod pod-projected-secrets-016b1c04-0d6a-4f2c-bad3-d7187e3259ca no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:39:42.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1083" for this suite.
Jan 30 14:39:48.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:39:48.294: INFO: namespace projected-1083 deletion completed in 6.144520861s

• [SLOW TEST:14.403 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:39:48.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-19acc851-ae45-46e0-bee3-e8a6a746dc3d
STEP: Creating a pod to test consume configMaps
Jan 30 14:39:48.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f" in namespace "configmap-9902" to be "success or failure"
Jan 30 14:39:48.413: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439679ms
Jan 30 14:39:50.450: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043691829s
Jan 30 14:39:52.464: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057436981s
Jan 30 14:39:54.481: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074571363s
Jan 30 14:39:56.494: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087397503s
Jan 30 14:39:58.511: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105086729s
STEP: Saw pod success
Jan 30 14:39:58.512: INFO: Pod "pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f" satisfied condition "success or failure"
Jan 30 14:39:58.519: INFO: Trying to get logs from node iruya-node pod pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f container configmap-volume-test: 
STEP: delete the pod
Jan 30 14:39:58.657: INFO: Waiting for pod pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f to disappear
Jan 30 14:39:58.666: INFO: Pod pod-configmaps-49addaae-87b1-45f1-98d8-a1d30e69d01f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:39:58.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9902" for this suite.
Jan 30 14:40:04.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:40:04.843: INFO: namespace configmap-9902 deletion completed in 6.171087817s

• [SLOW TEST:16.549 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:40:04.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:40:04.994: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71" in namespace "downward-api-4668" to be "success or failure"
Jan 30 14:40:05.006: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71": Phase="Pending", Reason="", readiness=false. Elapsed: 11.552281ms
Jan 30 14:40:07.015: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020415969s
Jan 30 14:40:09.023: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028506081s
Jan 30 14:40:11.033: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038797262s
Jan 30 14:40:13.042: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047856743s
Jan 30 14:40:15.052: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057157428s
STEP: Saw pod success
Jan 30 14:40:15.052: INFO: Pod "downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71" satisfied condition "success or failure"
Jan 30 14:40:15.055: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71 container client-container: 
STEP: delete the pod
Jan 30 14:40:15.203: INFO: Waiting for pod downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71 to disappear
Jan 30 14:40:15.214: INFO: Pod downwardapi-volume-d03b9476-01e6-4a7e-a23d-4d53c0f15a71 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:40:15.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4668" for this suite.
Jan 30 14:40:21.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:40:21.363: INFO: namespace downward-api-4668 deletion completed in 6.137948099s

• [SLOW TEST:16.520 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:40:21.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-43165ded-7811-4835-9058-c047048530ed
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-43165ded-7811-4835-9058-c047048530ed
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:41:47.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6286" for this suite.
Jan 30 14:42:09.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:42:09.584: INFO: namespace configmap-6286 deletion completed in 22.236205123s

• [SLOW TEST:108.221 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:42:09.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 30 14:42:09.787: INFO: Waiting up to 5m0s for pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53" in namespace "downward-api-3444" to be "success or failure"
Jan 30 14:42:09.810: INFO: Pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53": Phase="Pending", Reason="", readiness=false. Elapsed: 22.010162ms
Jan 30 14:42:11.822: INFO: Pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034333637s
Jan 30 14:42:13.833: INFO: Pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045239984s
Jan 30 14:42:15.860: INFO: Pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072518276s
Jan 30 14:42:17.880: INFO: Pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09267721s
STEP: Saw pod success
Jan 30 14:42:17.881: INFO: Pod "downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53" satisfied condition "success or failure"
Jan 30 14:42:17.884: INFO: Trying to get logs from node iruya-node pod downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53 container dapi-container: 
STEP: delete the pod
Jan 30 14:42:17.980: INFO: Waiting for pod downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53 to disappear
Jan 30 14:42:17.987: INFO: Pod downward-api-231ba1b8-b4e0-4c53-a463-678823e9ff53 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:42:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3444" for this suite.
Jan 30 14:42:24.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:42:24.181: INFO: namespace downward-api-3444 deletion completed in 6.158970804s

• [SLOW TEST:14.597 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:42:24.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 14:42:24.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9493'
Jan 30 14:42:26.154: INFO: stderr: ""
Jan 30 14:42:26.154: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 30 14:42:36.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9493 -o json'
Jan 30 14:42:36.426: INFO: stderr: ""
Jan 30 14:42:36.427: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-30T14:42:26Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-9493\",\n        \"resourceVersion\": \"22451787\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9493/pods/e2e-test-nginx-pod\",\n        \"uid\": \"5b358a68-9af2-4a82-95b0-7420280f34b4\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-b7rsk\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-b7rsk\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-b7rsk\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T14:42:26Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T14:42:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T14:42:33Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T14:42:26Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://3f13d6cae0b8429c366acbb8f0b822d721932dfb9954e47b75833b4a2c0ae7e3\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-30T14:42:32Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-30T14:42:26Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 30 14:42:36.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9493'
Jan 30 14:42:36.882: INFO: stderr: ""
Jan 30 14:42:36.883: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 30 14:42:36.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9493'
Jan 30 14:42:44.740: INFO: stderr: ""
Jan 30 14:42:44.741: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:42:44.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9493" for this suite.
Jan 30 14:42:50.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:42:50.935: INFO: namespace kubectl-9493 deletion completed in 6.182726843s

• [SLOW TEST:26.753 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:42:50.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:43:51.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5280" for this suite.
Jan 30 14:44:13.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:44:13.251: INFO: namespace container-probe-5280 deletion completed in 22.137874029s

• [SLOW TEST:82.315 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:44:13.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:44:13.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3" in namespace "projected-660" to be "success or failure"
Jan 30 14:44:13.421: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009899ms
Jan 30 14:44:15.435: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022871976s
Jan 30 14:44:17.449: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037140342s
Jan 30 14:44:19.460: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047659247s
Jan 30 14:44:21.468: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055788865s
Jan 30 14:44:23.478: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06623969s
STEP: Saw pod success
Jan 30 14:44:23.478: INFO: Pod "downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3" satisfied condition "success or failure"
Jan 30 14:44:23.484: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3 container client-container: 
STEP: delete the pod
Jan 30 14:44:23.635: INFO: Waiting for pod downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3 to disappear
Jan 30 14:44:23.685: INFO: Pod downwardapi-volume-a0d95109-3565-45f7-aa5a-87ae8dc41ee3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:44:23.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-660" for this suite.
Jan 30 14:44:29.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:44:29.850: INFO: namespace projected-660 deletion completed in 6.157543894s

• [SLOW TEST:16.599 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:44:29.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:44:30.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1" in namespace "downward-api-8828" to be "success or failure"
Jan 30 14:44:30.009: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293434ms
Jan 30 14:44:32.024: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021686401s
Jan 30 14:44:34.033: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030597164s
Jan 30 14:44:36.044: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041327805s
Jan 30 14:44:38.059: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056567091s
Jan 30 14:44:40.077: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07513584s
STEP: Saw pod success
Jan 30 14:44:40.078: INFO: Pod "downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1" satisfied condition "success or failure"
Jan 30 14:44:40.087: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1 container client-container: 
STEP: delete the pod
Jan 30 14:44:40.171: INFO: Waiting for pod downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1 to disappear
Jan 30 14:44:40.216: INFO: Pod downwardapi-volume-7f68486e-bdb9-48b5-8f8b-0aa00b6f51f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:44:40.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8828" for this suite.
Jan 30 14:44:46.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:44:46.370: INFO: namespace downward-api-8828 deletion completed in 6.148587307s

• [SLOW TEST:16.519 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
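For context, the "downward API volume" pod this test creates looks roughly like the sketch below (illustrative only; the container name `client-container` matches the log, but the image, command, and file path are assumptions, not taken from the test source). Because the container sets no `resources.limits.cpu`, the downward API falls back to the node's allocatable CPU, which is exactly the behavior being verified:

```yaml
# Illustrative sketch -- not copied from the e2e test source.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox                      # image is an assumption
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the projected value defaults
    # to the node's allocatable CPU -- the behavior under test.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```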
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:44:46.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 30 14:44:46.513: INFO: Waiting up to 5m0s for pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3" in namespace "emptydir-8694" to be "success or failure"
Jan 30 14:44:46.536: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.881922ms
Jan 30 14:44:48.752: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239225377s
Jan 30 14:44:50.767: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253830536s
Jan 30 14:44:52.781: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267794297s
Jan 30 14:44:54.802: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288337998s
Jan 30 14:44:56.817: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303930098s
STEP: Saw pod success
Jan 30 14:44:56.817: INFO: Pod "pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3" satisfied condition "success or failure"
Jan 30 14:44:56.824: INFO: Trying to get logs from node iruya-node pod pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3 container test-container: 
STEP: delete the pod
Jan 30 14:44:57.014: INFO: Waiting for pod pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3 to disappear
Jan 30 14:44:57.025: INFO: Pod pod-3d83d9bd-f63e-4749-8faf-37a13f4be2e3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:44:57.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8694" for this suite.
Jan 30 14:45:03.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:45:03.179: INFO: namespace emptydir-8694 deletion completed in 6.15018268s

• [SLOW TEST:16.809 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
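The "(root,0644,tmpfs)" case exercises an `emptyDir` volume backed by memory. A minimal sketch of such a pod follows (the real test uses the e2e `mounttest` image; the busybox command here is an illustrative stand-in that writes a file and reports its mode):

```yaml
# Illustrative sketch -- not copied from the e2e test source.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  containers:
  - name: test-container
    image: busybox                      # image is an assumption
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # Memory medium backs the volume with tmpfs
```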
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:45:03.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9149aa78-d4b0-4f0c-9338-f2a6d14b22be
STEP: Creating a pod to test consume secrets
Jan 30 14:45:03.305: INFO: Waiting up to 5m0s for pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46" in namespace "secrets-3432" to be "success or failure"
Jan 30 14:45:03.318: INFO: Pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46": Phase="Pending", Reason="", readiness=false. Elapsed: 12.93606ms
Jan 30 14:45:05.326: INFO: Pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020294165s
Jan 30 14:45:07.341: INFO: Pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035453297s
Jan 30 14:45:09.348: INFO: Pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042494864s
Jan 30 14:45:11.357: INFO: Pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051387744s
STEP: Saw pod success
Jan 30 14:45:11.357: INFO: Pod "pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46" satisfied condition "success or failure"
Jan 30 14:45:11.363: INFO: Trying to get logs from node iruya-node pod pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46 container secret-volume-test: 
STEP: delete the pod
Jan 30 14:45:11.427: INFO: Waiting for pod pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46 to disappear
Jan 30 14:45:11.519: INFO: Pod pod-secrets-7dc000cb-fd39-4c0f-92c9-5d019c3e7d46 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:45:11.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3432" for this suite.
Jan 30 14:45:17.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:45:17.930: INFO: namespace secrets-3432 deletion completed in 6.401040848s

• [SLOW TEST:14.750 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
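The Secrets `defaultMode` test mounts a secret volume whose file permissions are overridden via `defaultMode`. A hedged sketch (the secret name and mode value here are assumptions; the log shows the generated secret name but not the mode used):

```yaml
# Illustrative sketch -- not copied from the e2e test source.
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox                      # image is an assumption
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret             # name is an assumption
      defaultMode: 0400                 # mode value is an assumption
```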
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:45:17.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 30 14:45:18.031: INFO: Waiting up to 5m0s for pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982" in namespace "downward-api-2676" to be "success or failure"
Jan 30 14:45:18.051: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982": Phase="Pending", Reason="", readiness=false. Elapsed: 20.434045ms
Jan 30 14:45:20.060: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029354228s
Jan 30 14:45:22.068: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036725894s
Jan 30 14:45:24.075: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044190742s
Jan 30 14:45:26.094: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062647667s
Jan 30 14:45:28.102: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071304495s
STEP: Saw pod success
Jan 30 14:45:28.102: INFO: Pod "downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982" satisfied condition "success or failure"
Jan 30 14:45:28.124: INFO: Trying to get logs from node iruya-node pod downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982 container dapi-container: 
STEP: delete the pod
Jan 30 14:45:28.323: INFO: Waiting for pod downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982 to disappear
Jan 30 14:45:28.417: INFO: Pod downward-api-e26134f7-8d75-4e0d-89df-6bc907fd5982 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:45:28.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2676" for this suite.
Jan 30 14:45:34.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:45:34.586: INFO: namespace downward-api-2676 deletion completed in 6.158026732s

• [SLOW TEST:16.656 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
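The env-var variant of the downward API injects `limits.*` and `requests.*` through `valueFrom.resourceFieldRef` rather than a volume. A minimal sketch of that wiring (container name `dapi-container` matches the log; the image and concrete resource values are assumptions):

```yaml
# Illustrative sketch -- not copied from the e2e test source.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example
spec:
  containers:
  - name: dapi-container
    image: busybox                      # image is an assumption
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}   # values are assumptions
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
  restartPolicy: Never
```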
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:45:34.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-13ad0624-0638-444c-a59e-50433ff1ec8a
STEP: Creating a pod to test consume configMaps
Jan 30 14:45:34.737: INFO: Waiting up to 5m0s for pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311" in namespace "configmap-6000" to be "success or failure"
Jan 30 14:45:34.745: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101559ms
Jan 30 14:45:36.751: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014210876s
Jan 30 14:45:38.760: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022952967s
Jan 30 14:45:40.777: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039882925s
Jan 30 14:45:42.789: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052197541s
Jan 30 14:45:44.797: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059656322s
STEP: Saw pod success
Jan 30 14:45:44.797: INFO: Pod "pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311" satisfied condition "success or failure"
Jan 30 14:45:44.800: INFO: Trying to get logs from node iruya-node pod pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311 container configmap-volume-test: 
STEP: delete the pod
Jan 30 14:45:44.930: INFO: Waiting for pod pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311 to disappear
Jan 30 14:45:44.943: INFO: Pod pod-configmaps-44c09228-ee3b-4455-acc4-dbb93fc4d311 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:45:44.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6000" for this suite.
Jan 30 14:45:50.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:45:51.112: INFO: namespace configmap-6000 deletion completed in 6.159678188s

• [SLOW TEST:16.525 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
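"Consumable with mappings" means the ConfigMap volume uses `items` to remap a key onto a custom file path instead of the default key-named file. A hedged sketch (the ConfigMap name, key, and path below are illustrative assumptions):

```yaml
# Illustrative sketch -- not copied from the e2e test source.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mappings-example
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                      # image is an assumption
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap                # name is an assumption
      items:
      - key: data-1                     # key/path mapping is the feature under test
        path: path/to/data
```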
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:45:51.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 30 14:45:51.203: INFO: Waiting up to 5m0s for pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22" in namespace "emptydir-5008" to be "success or failure"
Jan 30 14:45:51.209: INFO: Pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063354ms
Jan 30 14:45:53.216: INFO: Pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013326215s
Jan 30 14:45:55.220: INFO: Pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017872926s
Jan 30 14:45:57.238: INFO: Pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035076421s
Jan 30 14:45:59.245: INFO: Pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042587081s
STEP: Saw pod success
Jan 30 14:45:59.245: INFO: Pod "pod-35016ca6-9764-4232-a70f-39dc88ad7a22" satisfied condition "success or failure"
Jan 30 14:45:59.249: INFO: Trying to get logs from node iruya-node pod pod-35016ca6-9764-4232-a70f-39dc88ad7a22 container test-container: 
STEP: delete the pod
Jan 30 14:45:59.369: INFO: Waiting for pod pod-35016ca6-9764-4232-a70f-39dc88ad7a22 to disappear
Jan 30 14:45:59.390: INFO: Pod pod-35016ca6-9764-4232-a70f-39dc88ad7a22 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:45:59.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5008" for this suite.
Jan 30 14:46:05.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:46:05.587: INFO: namespace emptydir-5008 deletion completed in 6.185937296s

• [SLOW TEST:14.475 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:46:05.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan 30 14:46:05.731: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2631" to be "success or failure"
Jan 30 14:46:05.763: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 31.741517ms
Jan 30 14:46:07.775: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044306392s
Jan 30 14:46:09.791: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060183903s
Jan 30 14:46:11.814: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082660265s
Jan 30 14:46:13.944: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213485186s
Jan 30 14:46:15.952: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221347189s
Jan 30 14:46:17.960: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 12.228759218s
Jan 30 14:46:19.968: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.237320503s
STEP: Saw pod success
Jan 30 14:46:19.968: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 30 14:46:19.975: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 30 14:46:20.021: INFO: Waiting for pod pod-host-path-test to disappear
Jan 30 14:46:20.036: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:46:20.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2631" for this suite.
Jan 30 14:46:26.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:46:26.298: INFO: namespace hostpath-2631 deletion completed in 6.255638453s

• [SLOW TEST:20.711 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
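The HostPath mode test mounts a directory from the node and checks the mount point's file mode from inside the pod (the log shows it runs more than one container, e.g. `test-container-1`). A single-container sketch of the idea (host path, type, and image are assumptions):

```yaml
# Illustrative sketch -- not copied from the e2e test source.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-example
spec:
  containers:
  - name: test-container-1
    image: busybox                      # image is an assumption
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  restartPolicy: Never
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-example       # path is an assumption
      type: DirectoryOrCreate
```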
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:46:26.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 30 14:46:26.970: INFO: Waiting up to 5m0s for pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c" in namespace "emptydir-308" to be "success or failure"
Jan 30 14:46:26.979: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185406ms
Jan 30 14:46:28.990: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019204659s
Jan 30 14:46:30.999: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028891057s
Jan 30 14:46:33.064: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093420084s
Jan 30 14:46:35.074: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10338774s
Jan 30 14:46:37.090: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120052958s
STEP: Saw pod success
Jan 30 14:46:37.091: INFO: Pod "pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c" satisfied condition "success or failure"
Jan 30 14:46:37.096: INFO: Trying to get logs from node iruya-node pod pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c container test-container: 
STEP: delete the pod
Jan 30 14:46:37.157: INFO: Waiting for pod pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c to disappear
Jan 30 14:46:37.248: INFO: Pod pod-b293a8b0-46ba-4be0-ab2f-55eadedd471c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:46:37.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-308" for this suite.
Jan 30 14:46:43.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:46:43.445: INFO: namespace emptydir-308 deletion completed in 6.189368273s

• [SLOW TEST:17.147 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:46:43.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:46:43.535: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.006583ms)
Jan 30 14:46:43.587: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 52.430175ms)
Jan 30 14:46:43.602: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.354281ms)
Jan 30 14:46:43.612: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.898511ms)
Jan 30 14:46:43.620: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.076506ms)
Jan 30 14:46:43.627: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.360401ms)
Jan 30 14:46:43.633: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.743473ms)
Jan 30 14:46:43.644: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.194518ms)
Jan 30 14:46:43.652: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.748147ms)
Jan 30 14:46:43.673: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.8061ms)
Jan 30 14:46:43.688: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.483448ms)
Jan 30 14:46:43.698: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.406412ms)
Jan 30 14:46:43.705: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.639018ms)
Jan 30 14:46:43.711: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.098164ms)
Jan 30 14:46:43.719: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.972108ms)
Jan 30 14:46:43.723: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.349817ms)
Jan 30 14:46:43.727: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.159572ms)
Jan 30 14:46:43.731: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.020212ms)
Jan 30 14:46:43.736: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.559929ms)
Jan 30 14:46:43.740: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.992084ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:46:43.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2984" for this suite.
Jan 30 14:46:49.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:46:49.951: INFO: namespace proxy-2984 deletion completed in 6.207498602s

• [SLOW TEST:6.505 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:46:49.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8412
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 14:46:50.034: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 14:47:28.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8412 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:47:28.277: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:47:29.057691       8 log.go:172] (0xc0017588f0) (0xc000fce000) Create stream
I0130 14:47:29.057949       8 log.go:172] (0xc0017588f0) (0xc000fce000) Stream added, broadcasting: 1
I0130 14:47:29.074382       8 log.go:172] (0xc0017588f0) Reply frame received for 1
I0130 14:47:29.074431       8 log.go:172] (0xc0017588f0) (0xc002e3ad20) Create stream
I0130 14:47:29.074443       8 log.go:172] (0xc0017588f0) (0xc002e3ad20) Stream added, broadcasting: 3
I0130 14:47:29.078028       8 log.go:172] (0xc0017588f0) Reply frame received for 3
I0130 14:47:29.078151       8 log.go:172] (0xc0017588f0) (0xc00058f040) Create stream
I0130 14:47:29.078179       8 log.go:172] (0xc0017588f0) (0xc00058f040) Stream added, broadcasting: 5
I0130 14:47:29.083115       8 log.go:172] (0xc0017588f0) Reply frame received for 5
I0130 14:47:29.257383       8 log.go:172] (0xc0017588f0) Data frame received for 3
I0130 14:47:29.257515       8 log.go:172] (0xc002e3ad20) (3) Data frame handling
I0130 14:47:29.257547       8 log.go:172] (0xc002e3ad20) (3) Data frame sent
I0130 14:47:29.449174       8 log.go:172] (0xc0017588f0) Data frame received for 1
I0130 14:47:29.449365       8 log.go:172] (0xc0017588f0) (0xc002e3ad20) Stream removed, broadcasting: 3
I0130 14:47:29.449512       8 log.go:172] (0xc000fce000) (1) Data frame handling
I0130 14:47:29.449552       8 log.go:172] (0xc0017588f0) (0xc00058f040) Stream removed, broadcasting: 5
I0130 14:47:29.449647       8 log.go:172] (0xc000fce000) (1) Data frame sent
I0130 14:47:29.449668       8 log.go:172] (0xc0017588f0) (0xc000fce000) Stream removed, broadcasting: 1
I0130 14:47:29.449691       8 log.go:172] (0xc0017588f0) Go away received
I0130 14:47:29.449984       8 log.go:172] (0xc0017588f0) (0xc000fce000) Stream removed, broadcasting: 1
I0130 14:47:29.450009       8 log.go:172] (0xc0017588f0) (0xc002e3ad20) Stream removed, broadcasting: 3
I0130 14:47:29.450027       8 log.go:172] (0xc0017588f0) (0xc00058f040) Stream removed, broadcasting: 5
Jan 30 14:47:29.450: INFO: Waiting for endpoints: map[]
Jan 30 14:47:29.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8412 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:47:29.458: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:47:29.528344       8 log.go:172] (0xc001759340) (0xc000fceaa0) Create stream
I0130 14:47:29.528556       8 log.go:172] (0xc001759340) (0xc000fceaa0) Stream added, broadcasting: 1
I0130 14:47:29.551162       8 log.go:172] (0xc001759340) Reply frame received for 1
I0130 14:47:29.551446       8 log.go:172] (0xc001759340) (0xc00269a000) Create stream
I0130 14:47:29.551481       8 log.go:172] (0xc001759340) (0xc00269a000) Stream added, broadcasting: 3
I0130 14:47:29.559234       8 log.go:172] (0xc001759340) Reply frame received for 3
I0130 14:47:29.559388       8 log.go:172] (0xc001759340) (0xc000fcebe0) Create stream
I0130 14:47:29.559419       8 log.go:172] (0xc001759340) (0xc000fcebe0) Stream added, broadcasting: 5
I0130 14:47:29.561392       8 log.go:172] (0xc001759340) Reply frame received for 5
I0130 14:47:29.679664       8 log.go:172] (0xc001759340) Data frame received for 3
I0130 14:47:29.679854       8 log.go:172] (0xc00269a000) (3) Data frame handling
I0130 14:47:29.679894       8 log.go:172] (0xc00269a000) (3) Data frame sent
I0130 14:47:29.838869       8 log.go:172] (0xc001759340) Data frame received for 1
I0130 14:47:29.839038       8 log.go:172] (0xc000fceaa0) (1) Data frame handling
I0130 14:47:29.839065       8 log.go:172] (0xc000fceaa0) (1) Data frame sent
I0130 14:47:29.839347       8 log.go:172] (0xc001759340) (0xc00269a000) Stream removed, broadcasting: 3
I0130 14:47:29.839572       8 log.go:172] (0xc001759340) (0xc000fceaa0) Stream removed, broadcasting: 1
I0130 14:47:29.839863       8 log.go:172] (0xc001759340) (0xc000fcebe0) Stream removed, broadcasting: 5
I0130 14:47:29.839912       8 log.go:172] (0xc001759340) Go away received
I0130 14:47:29.840083       8 log.go:172] (0xc001759340) (0xc000fceaa0) Stream removed, broadcasting: 1
I0130 14:47:29.840093       8 log.go:172] (0xc001759340) (0xc00269a000) Stream removed, broadcasting: 3
I0130 14:47:29.840098       8 log.go:172] (0xc001759340) (0xc000fcebe0) Stream removed, broadcasting: 5
Jan 30 14:47:29.840: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:47:29.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8412" for this suite.
Jan 30 14:47:55.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:47:55.996: INFO: namespace pod-network-test-8412 deletion completed in 26.144472778s

• [SLOW TEST:66.044 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:47:55.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 30 14:48:06.715: INFO: Successfully updated pod "annotationupdate81f058c5-89fa-44f4-9825-f4d0c011d805"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:48:08.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8039" for this suite.
Jan 30 14:48:38.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:48:38.991: INFO: namespace downward-api-8039 deletion completed in 30.161048393s

• [SLOW TEST:42.995 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:48:38.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:48:39.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666" in namespace "projected-974" to be "success or failure"
Jan 30 14:48:39.185: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666": Phase="Pending", Reason="", readiness=false. Elapsed: 40.963997ms
Jan 30 14:48:41.195: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050963541s
Jan 30 14:48:43.201: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057434809s
Jan 30 14:48:45.216: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072591768s
Jan 30 14:48:47.226: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082711209s
Jan 30 14:48:49.236: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092894168s
STEP: Saw pod success
Jan 30 14:48:49.237: INFO: Pod "downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666" satisfied condition "success or failure"
Jan 30 14:48:49.243: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666 container client-container: 
STEP: delete the pod
Jan 30 14:48:49.359: INFO: Waiting for pod downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666 to disappear
Jan 30 14:48:49.366: INFO: Pod downwardapi-volume-fc04f181-9af1-4523-9dd1-015b816ac666 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:48:49.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-974" for this suite.
Jan 30 14:48:55.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:48:55.536: INFO: namespace projected-974 deletion completed in 6.163629884s

• [SLOW TEST:16.544 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:48:55.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 30 14:48:55.709: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7957,SelfLink:/api/v1/namespaces/watch-7957/configmaps/e2e-watch-test-watch-closed,UID:109c95cd-1ad8-4087-966b-599f3c8cb47e,ResourceVersion:22452680,Generation:0,CreationTimestamp:2020-01-30 14:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 14:48:55.710: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7957,SelfLink:/api/v1/namespaces/watch-7957/configmaps/e2e-watch-test-watch-closed,UID:109c95cd-1ad8-4087-966b-599f3c8cb47e,ResourceVersion:22452681,Generation:0,CreationTimestamp:2020-01-30 14:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 30 14:48:55.801: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7957,SelfLink:/api/v1/namespaces/watch-7957/configmaps/e2e-watch-test-watch-closed,UID:109c95cd-1ad8-4087-966b-599f3c8cb47e,ResourceVersion:22452682,Generation:0,CreationTimestamp:2020-01-30 14:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 14:48:55.802: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7957,SelfLink:/api/v1/namespaces/watch-7957/configmaps/e2e-watch-test-watch-closed,UID:109c95cd-1ad8-4087-966b-599f3c8cb47e,ResourceVersion:22452683,Generation:0,CreationTimestamp:2020-01-30 14:48:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:48:55.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7957" for this suite.
Jan 30 14:49:01.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:49:01.954: INFO: namespace watch-7957 deletion completed in 6.141580478s

• [SLOW TEST:6.417 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:49:01.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0c9a19c7-a17c-40a9-8e2c-54c9dfbf47b6
STEP: Creating a pod to test consume secrets
Jan 30 14:49:02.165: INFO: Waiting up to 5m0s for pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9" in namespace "secrets-2241" to be "success or failure"
Jan 30 14:49:02.172: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847484ms
Jan 30 14:49:04.181: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015582197s
Jan 30 14:49:06.198: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032297115s
Jan 30 14:49:08.205: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039993395s
Jan 30 14:49:10.215: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049377401s
Jan 30 14:49:12.224: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058182654s
STEP: Saw pod success
Jan 30 14:49:12.224: INFO: Pod "pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9" satisfied condition "success or failure"
Jan 30 14:49:12.229: INFO: Trying to get logs from node iruya-node pod pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9 container secret-volume-test: 
STEP: delete the pod
Jan 30 14:49:12.300: INFO: Waiting for pod pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9 to disappear
Jan 30 14:49:12.319: INFO: Pod pod-secrets-29b8c3a2-fed0-4281-9fda-517b7cfbc4c9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:49:12.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2241" for this suite.
Jan 30 14:49:18.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:49:18.652: INFO: namespace secrets-2241 deletion completed in 6.316376895s

• [SLOW TEST:16.698 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:49:18.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:49:49.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8268" for this suite.
Jan 30 14:49:55.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:49:55.529: INFO: namespace namespaces-8268 deletion completed in 6.140037505s
STEP: Destroying namespace "nsdeletetest-5938" for this suite.
Jan 30 14:49:55.532: INFO: Namespace nsdeletetest-5938 was already deleted
STEP: Destroying namespace "nsdeletetest-7166" for this suite.
Jan 30 14:50:01.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:50:01.746: INFO: namespace nsdeletetest-7166 deletion completed in 6.214124151s

• [SLOW TEST:43.093 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:50:01.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9067.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9067.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 14:50:13.979: INFO: File wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-59c79f03-28b8-4676-9b4b-416d8a1e5196 contains '' instead of 'foo.example.com.'
Jan 30 14:50:14.005: INFO: File jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-59c79f03-28b8-4676-9b4b-416d8a1e5196 contains '' instead of 'foo.example.com.'
Jan 30 14:50:14.005: INFO: Lookups using dns-9067/dns-test-59c79f03-28b8-4676-9b4b-416d8a1e5196 failed for: [wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local]

Jan 30 14:50:19.027: INFO: DNS probes using dns-test-59c79f03-28b8-4676-9b4b-416d8a1e5196 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9067.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9067.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 14:50:33.265: INFO: File wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 contains '' instead of 'bar.example.com.'
Jan 30 14:50:33.285: INFO: File jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 contains '' instead of 'bar.example.com.'
Jan 30 14:50:33.286: INFO: Lookups using dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 failed for: [wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local]

Jan 30 14:50:38.308: INFO: File wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 30 14:50:38.326: INFO: File jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 30 14:50:38.326: INFO: Lookups using dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 failed for: [wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local]

Jan 30 14:50:43.325: INFO: File wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 30 14:50:43.331: INFO: File jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 30 14:50:43.331: INFO: Lookups using dns-9067/dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 failed for: [wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local]

Jan 30 14:50:48.331: INFO: DNS probes using dns-test-5b9d9233-e256-4eed-893d-ad86967ec465 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9067.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9067.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9067.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 14:51:04.622: INFO: File jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local from pod  dns-9067/dns-test-15bf5965-4da4-44b4-8f70-99cd9ad1082a contains '' instead of '10.97.75.134'
Jan 30 14:51:04.622: INFO: Lookups using dns-9067/dns-test-15bf5965-4da4-44b4-8f70-99cd9ad1082a failed for: [jessie_udp@dns-test-service-3.dns-9067.svc.cluster.local]

Jan 30 14:51:09.643: INFO: DNS probes using dns-test-15bf5965-4da4-44b4-8f70-99cd9ad1082a succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:51:09.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9067" for this suite.
Jan 30 14:51:16.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:51:16.151: INFO: namespace dns-9067 deletion completed in 6.165317218s

• [SLOW TEST:74.405 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:51:16.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5880
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 14:51:16.214: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 14:51:56.432: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5880 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:51:56.432: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:51:56.526579       8 log.go:172] (0xc001ad8c60) (0xc0022825a0) Create stream
I0130 14:51:56.526888       8 log.go:172] (0xc001ad8c60) (0xc0022825a0) Stream added, broadcasting: 1
I0130 14:51:56.543560       8 log.go:172] (0xc001ad8c60) Reply frame received for 1
I0130 14:51:56.543649       8 log.go:172] (0xc001ad8c60) (0xc001fbf900) Create stream
I0130 14:51:56.543666       8 log.go:172] (0xc001ad8c60) (0xc001fbf900) Stream added, broadcasting: 3
I0130 14:51:56.546379       8 log.go:172] (0xc001ad8c60) Reply frame received for 3
I0130 14:51:56.546401       8 log.go:172] (0xc001ad8c60) (0xc0021fb2c0) Create stream
I0130 14:51:56.546414       8 log.go:172] (0xc001ad8c60) (0xc0021fb2c0) Stream added, broadcasting: 5
I0130 14:51:56.550062       8 log.go:172] (0xc001ad8c60) Reply frame received for 5
I0130 14:51:57.826535       8 log.go:172] (0xc001ad8c60) Data frame received for 3
I0130 14:51:57.826694       8 log.go:172] (0xc001fbf900) (3) Data frame handling
I0130 14:51:57.826733       8 log.go:172] (0xc001fbf900) (3) Data frame sent
I0130 14:51:58.045760       8 log.go:172] (0xc001ad8c60) (0xc0021fb2c0) Stream removed, broadcasting: 5
I0130 14:51:58.046029       8 log.go:172] (0xc001ad8c60) Data frame received for 1
I0130 14:51:58.046055       8 log.go:172] (0xc0022825a0) (1) Data frame handling
I0130 14:51:58.046093       8 log.go:172] (0xc0022825a0) (1) Data frame sent
I0130 14:51:58.046178       8 log.go:172] (0xc001ad8c60) (0xc001fbf900) Stream removed, broadcasting: 3
I0130 14:51:58.046305       8 log.go:172] (0xc001ad8c60) (0xc0022825a0) Stream removed, broadcasting: 1
I0130 14:51:58.046454       8 log.go:172] (0xc001ad8c60) Go away received
I0130 14:51:58.046736       8 log.go:172] (0xc001ad8c60) (0xc0022825a0) Stream removed, broadcasting: 1
I0130 14:51:58.046763       8 log.go:172] (0xc001ad8c60) (0xc001fbf900) Stream removed, broadcasting: 3
I0130 14:51:58.046777       8 log.go:172] (0xc001ad8c60) (0xc0021fb2c0) Stream removed, broadcasting: 5
Jan 30 14:51:58.046: INFO: Found all expected endpoints: [netserver-0]
Jan 30 14:51:58.055: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5880 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 14:51:58.055: INFO: >>> kubeConfig: /root/.kube/config
I0130 14:51:58.126814       8 log.go:172] (0xc002fbad10) (0xc001fbfcc0) Create stream
I0130 14:51:58.127099       8 log.go:172] (0xc002fbad10) (0xc001fbfcc0) Stream added, broadcasting: 1
I0130 14:51:58.138848       8 log.go:172] (0xc002fbad10) Reply frame received for 1
I0130 14:51:58.138904       8 log.go:172] (0xc002fbad10) (0xc0021fb360) Create stream
I0130 14:51:58.138915       8 log.go:172] (0xc002fbad10) (0xc0021fb360) Stream added, broadcasting: 3
I0130 14:51:58.140490       8 log.go:172] (0xc002fbad10) Reply frame received for 3
I0130 14:51:58.140514       8 log.go:172] (0xc002fbad10) (0xc001fbfd60) Create stream
I0130 14:51:58.140525       8 log.go:172] (0xc002fbad10) (0xc001fbfd60) Stream added, broadcasting: 5
I0130 14:51:58.142927       8 log.go:172] (0xc002fbad10) Reply frame received for 5
I0130 14:51:59.240912       8 log.go:172] (0xc002fbad10) Data frame received for 3
I0130 14:51:59.241222       8 log.go:172] (0xc0021fb360) (3) Data frame handling
I0130 14:51:59.241290       8 log.go:172] (0xc0021fb360) (3) Data frame sent
I0130 14:51:59.427891       8 log.go:172] (0xc002fbad10) (0xc0021fb360) Stream removed, broadcasting: 3
I0130 14:51:59.428213       8 log.go:172] (0xc002fbad10) (0xc001fbfd60) Stream removed, broadcasting: 5
I0130 14:51:59.428326       8 log.go:172] (0xc002fbad10) Data frame received for 1
I0130 14:51:59.428348       8 log.go:172] (0xc001fbfcc0) (1) Data frame handling
I0130 14:51:59.428374       8 log.go:172] (0xc001fbfcc0) (1) Data frame sent
I0130 14:51:59.428393       8 log.go:172] (0xc002fbad10) (0xc001fbfcc0) Stream removed, broadcasting: 1
I0130 14:51:59.428416       8 log.go:172] (0xc002fbad10) Go away received
I0130 14:51:59.428680       8 log.go:172] (0xc002fbad10) (0xc001fbfcc0) Stream removed, broadcasting: 1
I0130 14:51:59.428707       8 log.go:172] (0xc002fbad10) (0xc0021fb360) Stream removed, broadcasting: 3
I0130 14:51:59.428720       8 log.go:172] (0xc002fbad10) (0xc001fbfd60) Stream removed, broadcasting: 5
Jan 30 14:51:59.428: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:51:59.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5880" for this suite.
Jan 30 14:52:23.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:52:23.634: INFO: namespace pod-network-test-5880 deletion completed in 24.191564885s

• [SLOW TEST:67.482 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
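The UDP check above works by exec-ing `echo hostName | nc -w 1 -u <pod-ip> 8081` in a host-network test container and matching the hostname each netserver pod echoes back. A minimal local sketch of that request/echo pattern (loopback socket and the `netserver-0` name are illustrative, not the e2e framework's actual implementation):

```python
import socket
import threading

def udp_hostname_responder(sock, hostname):
    # Reply to every "hostName" datagram with a fixed hostname,
    # like the netserver pod the test queries with nc.
    while True:
        data, addr = sock.recvfrom(1024)
        if data.strip() == b"hostName":
            sock.sendto(hostname.encode(), addr)

def query_hostname(server_addr, timeout=1.0):
    # Client side of the check: send "hostName" over UDP, return the reply.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"hostName", server_addr)
        reply, _ = s.recvfrom(1024)
        return reply.decode()

# Local demonstration over loopback (the real test targets a pod IP on port 8081).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port, not the test's 8081
threading.Thread(target=udp_hostname_responder,
                 args=(server, "netserver-0"), daemon=True).start()
print(query_hostname(server.getsockname()))  # prints: netserver-0
```

The test passes once every expected endpoint name (`netserver-0`, `netserver-1`) has been seen in such replies.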
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:52:23.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 14:52:23.850: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42" in namespace "downward-api-5332" to be "success or failure"
Jan 30 14:52:23.933: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42": Phase="Pending", Reason="", readiness=false. Elapsed: 82.553446ms
Jan 30 14:52:25.946: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095636529s
Jan 30 14:52:27.968: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116883104s
Jan 30 14:52:29.975: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12469846s
Jan 30 14:52:31.987: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13636967s
Jan 30 14:52:34.004: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15362795s
STEP: Saw pod success
Jan 30 14:52:34.005: INFO: Pod "downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42" satisfied condition "success or failure"
Jan 30 14:52:34.016: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42 container client-container: 
STEP: delete the pod
Jan 30 14:52:34.252: INFO: Waiting for pod downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42 to disappear
Jan 30 14:52:34.260: INFO: Pod downwardapi-volume-cfef712f-03af-49b1-8bdb-f4f0114f3a42 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:52:34.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5332" for this suite.
Jan 30 14:52:40.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:52:40.432: INFO: namespace downward-api-5332 deletion completed in 6.164016288s

• [SLOW TEST:16.798 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
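The "memory request" test above creates a pod whose downward API volume projects the container's own `requests.memory` into a file, then checks the file's contents from the container logs. A hedged sketch of such a manifest, built as a plain dict (names, image, and the 64Mi value are hypothetical; the real test generates its own):

```python
# Illustrative pod manifest: a downward API volume item with a
# resourceFieldRef exposing the container's memory request as a file.
MEM_REQUEST = "64Mi"  # assumed value for the sketch

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            # Print the projected file so the test can read it from the logs.
            "command": ["sh", "-c", "cat /etc/podinfo/mem_request"],
            "resources": {"requests": {"memory": MEM_REQUEST}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "mem_request",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.memory",
                    },
                }],
            },
        }],
    },
}
```

The pod runs to completion ("Succeeded" in the log), which is why the framework waits for the "success or failure" condition rather than readiness.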
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:52:40.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2715.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2715.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2715.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2715.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2715.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2715.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 14:52:52.579: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95: the server could not find the requested resource (get pods dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95)
Jan 30 14:52:52.584: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95: the server could not find the requested resource (get pods dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95)
Jan 30 14:52:52.590: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2715.svc.cluster.local from pod dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95: the server could not find the requested resource (get pods dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95)
Jan 30 14:52:52.596: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95: the server could not find the requested resource (get pods dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95)
Jan 30 14:52:52.605: INFO: Unable to read jessie_udp@PodARecord from pod dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95: the server could not find the requested resource (get pods dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95)
Jan 30 14:52:52.612: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95: the server could not find the requested resource (get pods dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95)
Jan 30 14:52:52.612: INFO: Lookups using dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2715.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 30 14:52:57.687: INFO: DNS probes using dns-2715/dns-test-62bae8d5-247a-40dc-8533-e4f6a18e6e95 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:52:57.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2715" for this suite.
Jan 30 14:53:03.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:53:04.070: INFO: namespace dns-2715 deletion completed in 6.201585903s

• [SLOW TEST:23.637 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
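The `awk -F. '{print $$1"-"$$2"-"$$3"-"$$4...}'` fragment in the probe scripts above derives the pod's DNS A record name from its IP: dots become dashes, then the namespace and `pod.cluster.local` are appended. The same derivation in Python (the sample IP and namespace are taken from this run's log):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    # A pod's A record name: dashes for dots, then <ns>.pod.<cluster-domain>.
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, cluster_domain)

print(pod_a_record("10.44.0.1", "dns-2715"))
# prints: 10-44-0-1.dns-2715.pod.cluster.local
```

The probe then resolves that name with `dig` over both UDP (`+notcp`) and TCP (`+tcp`) and writes an `OK` marker file for each lookup that succeeds; the early "Unable to read" lines are just the test polling before the markers exist.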
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:53:04.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 14:53:04.216: INFO: Number of nodes with available pods: 0
Jan 30 14:53:04.216: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:05.238: INFO: Number of nodes with available pods: 0
Jan 30 14:53:05.238: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:06.246: INFO: Number of nodes with available pods: 0
Jan 30 14:53:06.247: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:07.235: INFO: Number of nodes with available pods: 0
Jan 30 14:53:07.235: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:08.243: INFO: Number of nodes with available pods: 0
Jan 30 14:53:08.244: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:09.923: INFO: Number of nodes with available pods: 0
Jan 30 14:53:09.923: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:10.613: INFO: Number of nodes with available pods: 0
Jan 30 14:53:10.613: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:11.240: INFO: Number of nodes with available pods: 0
Jan 30 14:53:11.240: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:12.233: INFO: Number of nodes with available pods: 0
Jan 30 14:53:12.233: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:13.237: INFO: Number of nodes with available pods: 0
Jan 30 14:53:13.237: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:14.229: INFO: Number of nodes with available pods: 1
Jan 30 14:53:14.229: INFO: Node iruya-node is running more than one daemon pod
Jan 30 14:53:15.231: INFO: Number of nodes with available pods: 2
Jan 30 14:53:15.231: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 30 14:53:15.306: INFO: Number of nodes with available pods: 2
Jan 30 14:53:15.306: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9849, will wait for the garbage collector to delete the pods
Jan 30 14:53:16.403: INFO: Deleting DaemonSet.extensions daemon-set took: 14.172006ms
Jan 30 14:53:16.704: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.587044ms
Jan 30 14:53:27.916: INFO: Number of nodes with available pods: 0
Jan 30 14:53:27.917: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 14:53:27.927: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9849/daemonsets","resourceVersion":"22453416"},"items":null}

Jan 30 14:53:27.931: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9849/pods","resourceVersion":"22453416"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:53:27.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9849" for this suite.
Jan 30 14:53:33.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:53:34.085: INFO: namespace daemonsets-9849 deletion completed in 6.129839006s

• [SLOW TEST:30.014 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
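The long run of once-per-second "Number of nodes with available pods" lines above is the framework's generic poll-until-condition loop. A minimal sketch of that pattern (the `wait_for` name and signature are illustrative, not the e2e framework's API):

```python
import time

def wait_for(condition, timeout=300.0, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    # Poll `condition` until it returns True or `timeout` elapses,
    # mirroring the one-second check cadence visible in the log.
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False

# Usage sketch: a fake status source that becomes ready on the third check.
checks = {"n": 0}
def daemonset_ready():
    checks["n"] += 1
    return checks["n"] >= 3

print(wait_for(daemonset_ready, timeout=10.0, interval=0.0))  # prints: True
```

The test uses the same loop twice: once for the initial rollout, then again after force-setting a pod's phase to `Failed` to verify the DaemonSet controller recreates it.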
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:53:34.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 14:53:34.276: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 30 14:53:39.285: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 14:53:41.293: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 30 14:53:41.396: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5529,SelfLink:/apis/apps/v1/namespaces/deployment-5529/deployments/test-cleanup-deployment,UID:85e509ce-1cfc-480c-bbcf-875b444e844c,ResourceVersion:22453477,Generation:1,CreationTimestamp:2020-01-30 14:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
(The `25%!,(MISSING)` fragments in the Strategy dump above are a printf-verb artifact from an unescaped `%`; the intended values are `MaxUnavailable:25%` and `MaxSurge:25%`.)

Jan 30 14:53:41.433: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5529,SelfLink:/apis/apps/v1/namespaces/deployment-5529/replicasets/test-cleanup-deployment-55bbcbc84c,UID:9b986ed8-b5c5-4cc0-a974-9fed92c331a4,ResourceVersion:22453485,Generation:1,CreationTimestamp:2020-01-30 14:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 85e509ce-1cfc-480c-bbcf-875b444e844c 0xc000c63cb7 0xc000c63cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 30 14:53:41.433: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 30 14:53:41.434: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5529,SelfLink:/apis/apps/v1/namespaces/deployment-5529/replicasets/test-cleanup-controller,UID:93fb4466-4db6-4557-9d63-b1825f105756,ResourceVersion:22453478,Generation:1,CreationTimestamp:2020-01-30 14:53:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 85e509ce-1cfc-480c-bbcf-875b444e844c 0xc000c63be7 0xc000c63be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 30 14:53:41.456: INFO: Pod "test-cleanup-controller-d52q9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-d52q9,GenerateName:test-cleanup-controller-,Namespace:deployment-5529,SelfLink:/api/v1/namespaces/deployment-5529/pods/test-cleanup-controller-d52q9,UID:7de8d23c-fa7b-4f37-b92d-4447c902cfd7,ResourceVersion:22453473,Generation:0,CreationTimestamp:2020-01-30 14:53:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 93fb4466-4db6-4557-9d63-b1825f105756 0xc002e2d1c7 0xc002e2d1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rttfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rttfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rttfk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e2d240} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002e2d260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:53:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:53:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:53:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:53:34 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-30 14:53:34 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-30 14:53:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d78ecaa70b8e9ceb234f88675854fcb6f12f0613f1649ea4d668befba69a29a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 30 14:53:41.457: INFO: Pod "test-cleanup-deployment-55bbcbc84c-kfhfb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-kfhfb,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5529,SelfLink:/api/v1/namespaces/deployment-5529/pods/test-cleanup-deployment-55bbcbc84c-kfhfb,UID:d97c3996-6aad-46a7-8683-174f9cc7747d,ResourceVersion:22453483,Generation:0,CreationTimestamp:2020-01-30 14:53:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 9b986ed8-b5c5-4cc0-a974-9fed92c331a4 0xc002e2d347 0xc002e2d348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rttfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rttfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-rttfk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e2d3c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002e2d3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 14:53:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:53:41.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5529" for this suite.
Jan 30 14:53:47.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:53:47.683: INFO: namespace deployment-5529 deletion completed in 6.214752277s

• [SLOW TEST:13.598 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:53:47.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 30 14:54:02.439: INFO: Successfully updated pod "pod-update-5547fc20-3f9b-4fa5-9531-0f0c4209724d"
STEP: verifying the updated pod is in kubernetes
Jan 30 14:54:02.511: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:54:02.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6865" for this suite.
Jan 30 14:54:24.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:54:24.665: INFO: namespace pods-6865 deletion completed in 22.146025296s

• [SLOW TEST:36.981 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:54:24.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 30 14:54:35.352: INFO: Successfully updated pod "annotationupdate78bf7972-36bf-4504-ba49-3c16c7591042"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:54:37.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7008" for this suite.
Jan 30 14:54:59.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:54:59.779: INFO: namespace projected-7008 deletion completed in 22.198819975s

• [SLOW TEST:35.114 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:54:59.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-197de27c-fec0-475f-ad80-4890a54d4e65
STEP: Creating a pod to test consume configMaps
Jan 30 14:54:59.885: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7" in namespace "projected-9648" to be "success or failure"
Jan 30 14:54:59.892: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.664084ms
Jan 30 14:55:01.910: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024126929s
Jan 30 14:55:04.056: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170289412s
Jan 30 14:55:06.063: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177695079s
Jan 30 14:55:08.072: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186469622s
Jan 30 14:55:10.096: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210883204s
STEP: Saw pod success
Jan 30 14:55:10.097: INFO: Pod "pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7" satisfied condition "success or failure"
Jan 30 14:55:10.108: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 14:55:10.244: INFO: Waiting for pod pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7 to disappear
Jan 30 14:55:10.262: INFO: Pod pod-projected-configmaps-151b041d-a336-44e9-b1d9-ff4e9b5ff0b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:55:10.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9648" for this suite.
Jan 30 14:55:16.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:55:16.439: INFO: namespace projected-9648 deletion completed in 6.162957952s

• [SLOW TEST:16.660 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:55:16.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 30 14:55:17.105: INFO: created pod pod-service-account-defaultsa
Jan 30 14:55:17.105: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 30 14:55:17.141: INFO: created pod pod-service-account-mountsa
Jan 30 14:55:17.141: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 30 14:55:17.163: INFO: created pod pod-service-account-nomountsa
Jan 30 14:55:17.163: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 30 14:55:17.289: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 30 14:55:17.290: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 30 14:55:17.373: INFO: created pod pod-service-account-mountsa-mountspec
Jan 30 14:55:17.373: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 30 14:55:19.195: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 30 14:55:19.196: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 30 14:55:19.387: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 30 14:55:19.387: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 30 14:55:19.927: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 30 14:55:19.927: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 30 14:55:19.960: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 30 14:55:19.960: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:55:19.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2036" for this suite.
Jan 30 14:55:53.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:55:53.501: INFO: namespace svcaccounts-2036 deletion completed in 33.375499677s

• [SLOW TEST:37.062 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:55:53.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 14:55:53.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4713" for this suite.
Jan 30 14:56:15.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 14:56:15.895: INFO: namespace pods-4713 deletion completed in 22.187829915s

• [SLOW TEST:22.393 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 14:56:15.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-96ed002e-c2bb-4abf-90b0-fdf1dd65bede in namespace container-probe-5181
Jan 30 14:56:26.034: INFO: Started pod test-webserver-96ed002e-c2bb-4abf-90b0-fdf1dd65bede in namespace container-probe-5181
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 14:56:26.043: INFO: Initial restart count of pod test-webserver-96ed002e-c2bb-4abf-90b0-fdf1dd65bede is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:00:26.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5181" for this suite.
Jan 30 15:00:32.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:00:32.383: INFO: namespace container-probe-5181 deletion completed in 6.162174174s

• [SLOW TEST:256.488 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:00:32.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 30 15:00:32.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7425'
Jan 30 15:00:35.040: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 15:00:35.040: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan 30 15:00:35.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7425'
Jan 30 15:00:35.238: INFO: stderr: ""
Jan 30 15:00:35.238: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:00:35.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7425" for this suite.
Jan 30 15:00:41.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:00:41.440: INFO: namespace kubectl-7425 deletion completed in 6.19318353s

• [SLOW TEST:9.056 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:00:41.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 15:00:41.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:00:51.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8414" for this suite.
Jan 30 15:01:37.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:01:37.922: INFO: namespace pods-8414 deletion completed in 46.258826065s

• [SLOW TEST:56.483 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:01:37.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-22e58d0e-cce8-430e-a37f-66cd7f310c42
STEP: Creating configMap with name cm-test-opt-upd-6d111cbf-44bb-4d58-8aaf-82f3eb7becb9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-22e58d0e-cce8-430e-a37f-66cd7f310c42
STEP: Updating configmap cm-test-opt-upd-6d111cbf-44bb-4d58-8aaf-82f3eb7becb9
STEP: Creating configMap with name cm-test-opt-create-a19daf9b-a522-48af-9bd7-df67d16d7f5b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:01:56.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5640" for this suite.
Jan 30 15:02:18.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:02:18.649: INFO: namespace projected-5640 deletion completed in 22.154253281s

• [SLOW TEST:40.726 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:02:18.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2252
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2252
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2252
Jan 30 15:02:18.833: INFO: Found 0 stateful pods, waiting for 1
Jan 30 15:02:28.844: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 30 15:02:28.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 15:02:29.503: INFO: stderr: "I0130 15:02:29.082960    3854 log.go:172] (0xc000116dc0) (0xc000334820) Create stream\nI0130 15:02:29.083342    3854 log.go:172] (0xc000116dc0) (0xc000334820) Stream added, broadcasting: 1\nI0130 15:02:29.113349    3854 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0130 15:02:29.113389    3854 log.go:172] (0xc000116dc0) (0xc0006be3c0) Create stream\nI0130 15:02:29.113398    3854 log.go:172] (0xc000116dc0) (0xc0006be3c0) Stream added, broadcasting: 3\nI0130 15:02:29.114768    3854 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0130 15:02:29.114793    3854 log.go:172] (0xc000116dc0) (0xc000a62000) Create stream\nI0130 15:02:29.114802    3854 log.go:172] (0xc000116dc0) (0xc000a62000) Stream added, broadcasting: 5\nI0130 15:02:29.115691    3854 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0130 15:02:29.263958    3854 log.go:172] (0xc000116dc0) Data frame received for 5\nI0130 15:02:29.264030    3854 log.go:172] (0xc000a62000) (5) Data frame handling\nI0130 15:02:29.264064    3854 log.go:172] (0xc000a62000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 15:02:29.332653    3854 log.go:172] (0xc000116dc0) Data frame received for 3\nI0130 15:02:29.332788    3854 log.go:172] (0xc0006be3c0) (3) Data frame handling\nI0130 15:02:29.332855    3854 log.go:172] (0xc0006be3c0) (3) Data frame sent\nI0130 15:02:29.482238    3854 log.go:172] (0xc000116dc0) Data frame received for 1\nI0130 15:02:29.482441    3854 log.go:172] (0xc000116dc0) (0xc000a62000) Stream removed, broadcasting: 5\nI0130 15:02:29.483009    3854 log.go:172] (0xc000116dc0) (0xc0006be3c0) Stream removed, broadcasting: 3\nI0130 15:02:29.483313    3854 log.go:172] (0xc000334820) (1) Data frame handling\nI0130 15:02:29.483376    3854 log.go:172] (0xc000334820) (1) Data frame sent\nI0130 15:02:29.483409    3854 log.go:172] (0xc000116dc0) (0xc000334820) Stream removed, broadcasting: 1\nI0130 15:02:29.483444    3854 log.go:172] 
(0xc000116dc0) Go away received\nI0130 15:02:29.484957    3854 log.go:172] (0xc000116dc0) (0xc000334820) Stream removed, broadcasting: 1\nI0130 15:02:29.484976    3854 log.go:172] (0xc000116dc0) (0xc0006be3c0) Stream removed, broadcasting: 3\nI0130 15:02:29.484980    3854 log.go:172] (0xc000116dc0) (0xc000a62000) Stream removed, broadcasting: 5\n"
Jan 30 15:02:29.504: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 15:02:29.504: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 15:02:29.513: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 30 15:02:39.525: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 15:02:39.525: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 15:02:39.564: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995893s
Jan 30 15:02:40.632: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988116631s
Jan 30 15:02:41.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.920513615s
Jan 30 15:02:42.667: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.906210437s
Jan 30 15:02:43.677: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.885289823s
Jan 30 15:02:44.686: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.875290258s
Jan 30 15:02:45.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.866612214s
Jan 30 15:02:46.713: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.851683374s
Jan 30 15:02:47.739: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.839104579s
Jan 30 15:02:48.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 811.995571ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2252
Jan 30 15:02:49.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 15:02:50.293: INFO: stderr: "I0130 15:02:49.998393    3876 log.go:172] (0xc000984370) (0xc00066c6e0) Create stream\nI0130 15:02:49.998701    3876 log.go:172] (0xc000984370) (0xc00066c6e0) Stream added, broadcasting: 1\nI0130 15:02:50.004575    3876 log.go:172] (0xc000984370) Reply frame received for 1\nI0130 15:02:50.004608    3876 log.go:172] (0xc000984370) (0xc0008c2000) Create stream\nI0130 15:02:50.004623    3876 log.go:172] (0xc000984370) (0xc0008c2000) Stream added, broadcasting: 3\nI0130 15:02:50.006290    3876 log.go:172] (0xc000984370) Reply frame received for 3\nI0130 15:02:50.006312    3876 log.go:172] (0xc000984370) (0xc00066c780) Create stream\nI0130 15:02:50.006319    3876 log.go:172] (0xc000984370) (0xc00066c780) Stream added, broadcasting: 5\nI0130 15:02:50.007771    3876 log.go:172] (0xc000984370) Reply frame received for 5\nI0130 15:02:50.118454    3876 log.go:172] (0xc000984370) Data frame received for 5\nI0130 15:02:50.118621    3876 log.go:172] (0xc00066c780) (5) Data frame handling\nI0130 15:02:50.118657    3876 log.go:172] (0xc00066c780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 15:02:50.121465    3876 log.go:172] (0xc000984370) Data frame received for 3\nI0130 15:02:50.121539    3876 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0130 15:02:50.121574    3876 log.go:172] (0xc0008c2000) (3) Data frame sent\nI0130 15:02:50.278632    3876 log.go:172] (0xc000984370) (0xc00066c780) Stream removed, broadcasting: 5\nI0130 15:02:50.279162    3876 log.go:172] (0xc000984370) Data frame received for 1\nI0130 15:02:50.279417    3876 log.go:172] (0xc000984370) (0xc0008c2000) Stream removed, broadcasting: 3\nI0130 15:02:50.279599    3876 log.go:172] (0xc00066c6e0) (1) Data frame handling\nI0130 15:02:50.279679    3876 log.go:172] (0xc00066c6e0) (1) Data frame sent\nI0130 15:02:50.279714    3876 log.go:172] (0xc000984370) (0xc00066c6e0) Stream removed, broadcasting: 1\nI0130 15:02:50.279758    3876 log.go:172] 
(0xc000984370) Go away received\nI0130 15:02:50.281525    3876 log.go:172] (0xc000984370) (0xc00066c6e0) Stream removed, broadcasting: 1\nI0130 15:02:50.281541    3876 log.go:172] (0xc000984370) (0xc0008c2000) Stream removed, broadcasting: 3\nI0130 15:02:50.281547    3876 log.go:172] (0xc000984370) (0xc00066c780) Stream removed, broadcasting: 5\n"
Jan 30 15:02:50.293: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 15:02:50.293: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 15:02:50.306: INFO: Found 1 stateful pods, waiting for 3
Jan 30 15:03:00.324: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 15:03:00.324: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 15:03:00.324: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 15:03:10.318: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 15:03:10.318: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 15:03:10.318: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 30 15:03:10.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 15:03:11.032: INFO: stderr: "I0130 15:03:10.732726    3895 log.go:172] (0xc0008de2c0) (0xc0008cc6e0) Create stream\nI0130 15:03:10.733213    3895 log.go:172] (0xc0008de2c0) (0xc0008cc6e0) Stream added, broadcasting: 1\nI0130 15:03:10.740706    3895 log.go:172] (0xc0008de2c0) Reply frame received for 1\nI0130 15:03:10.740780    3895 log.go:172] (0xc0008de2c0) (0xc0008cc780) Create stream\nI0130 15:03:10.740789    3895 log.go:172] (0xc0008de2c0) (0xc0008cc780) Stream added, broadcasting: 3\nI0130 15:03:10.742025    3895 log.go:172] (0xc0008de2c0) Reply frame received for 3\nI0130 15:03:10.742053    3895 log.go:172] (0xc0008de2c0) (0xc000932000) Create stream\nI0130 15:03:10.742061    3895 log.go:172] (0xc0008de2c0) (0xc000932000) Stream added, broadcasting: 5\nI0130 15:03:10.743237    3895 log.go:172] (0xc0008de2c0) Reply frame received for 5\nI0130 15:03:10.887203    3895 log.go:172] (0xc0008de2c0) Data frame received for 3\nI0130 15:03:10.887375    3895 log.go:172] (0xc0008cc780) (3) Data frame handling\nI0130 15:03:10.887417    3895 log.go:172] (0xc0008cc780) (3) Data frame sent\nI0130 15:03:10.887486    3895 log.go:172] (0xc0008de2c0) Data frame received for 5\nI0130 15:03:10.887520    3895 log.go:172] (0xc000932000) (5) Data frame handling\nI0130 15:03:10.887547    3895 log.go:172] (0xc000932000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 15:03:11.019599    3895 log.go:172] (0xc0008de2c0) Data frame received for 1\nI0130 15:03:11.019755    3895 log.go:172] (0xc0008de2c0) (0xc000932000) Stream removed, broadcasting: 5\nI0130 15:03:11.019841    3895 log.go:172] (0xc0008cc6e0) (1) Data frame handling\nI0130 15:03:11.019865    3895 log.go:172] (0xc0008cc6e0) (1) Data frame sent\nI0130 15:03:11.020028    3895 log.go:172] (0xc0008de2c0) (0xc0008cc780) Stream removed, broadcasting: 3\nI0130 15:03:11.020056    3895 log.go:172] (0xc0008de2c0) (0xc0008cc6e0) Stream removed, broadcasting: 1\nI0130 15:03:11.020072    3895 log.go:172] 
(0xc0008de2c0) Go away received\nI0130 15:03:11.021350    3895 log.go:172] (0xc0008de2c0) (0xc0008cc6e0) Stream removed, broadcasting: 1\nI0130 15:03:11.021365    3895 log.go:172] (0xc0008de2c0) (0xc0008cc780) Stream removed, broadcasting: 3\nI0130 15:03:11.021373    3895 log.go:172] (0xc0008de2c0) (0xc000932000) Stream removed, broadcasting: 5\n"
Jan 30 15:03:11.032: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 15:03:11.032: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 15:03:11.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 15:03:11.349: INFO: stderr: "I0130 15:03:11.172134    3915 log.go:172] (0xc0009962c0) (0xc0008886e0) Create stream\nI0130 15:03:11.172364    3915 log.go:172] (0xc0009962c0) (0xc0008886e0) Stream added, broadcasting: 1\nI0130 15:03:11.175418    3915 log.go:172] (0xc0009962c0) Reply frame received for 1\nI0130 15:03:11.175455    3915 log.go:172] (0xc0009962c0) (0xc00061c320) Create stream\nI0130 15:03:11.175465    3915 log.go:172] (0xc0009962c0) (0xc00061c320) Stream added, broadcasting: 3\nI0130 15:03:11.176217    3915 log.go:172] (0xc0009962c0) Reply frame received for 3\nI0130 15:03:11.176239    3915 log.go:172] (0xc0009962c0) (0xc00040a000) Create stream\nI0130 15:03:11.176246    3915 log.go:172] (0xc0009962c0) (0xc00040a000) Stream added, broadcasting: 5\nI0130 15:03:11.177006    3915 log.go:172] (0xc0009962c0) Reply frame received for 5\nI0130 15:03:11.244212    3915 log.go:172] (0xc0009962c0) Data frame received for 5\nI0130 15:03:11.244262    3915 log.go:172] (0xc00040a000) (5) Data frame handling\nI0130 15:03:11.244279    3915 log.go:172] (0xc00040a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 15:03:11.263705    3915 log.go:172] (0xc0009962c0) Data frame received for 3\nI0130 15:03:11.263752    3915 log.go:172] (0xc00061c320) (3) Data frame handling\nI0130 15:03:11.263781    3915 log.go:172] (0xc00061c320) (3) Data frame sent\nI0130 15:03:11.337056    3915 log.go:172] (0xc0009962c0) (0xc00040a000) Stream removed, broadcasting: 5\nI0130 15:03:11.337515    3915 log.go:172] (0xc0009962c0) Data frame received for 1\nI0130 15:03:11.337565    3915 log.go:172] (0xc0009962c0) (0xc00061c320) Stream removed, broadcasting: 3\nI0130 15:03:11.337608    3915 log.go:172] (0xc0008886e0) (1) Data frame handling\nI0130 15:03:11.337635    3915 log.go:172] (0xc0008886e0) (1) Data frame sent\nI0130 15:03:11.337661    3915 log.go:172] (0xc0009962c0) (0xc0008886e0) Stream removed, broadcasting: 1\nI0130 15:03:11.338078    3915 log.go:172] 
(0xc0009962c0) Go away received\nI0130 15:03:11.339310    3915 log.go:172] (0xc0009962c0) (0xc0008886e0) Stream removed, broadcasting: 1\nI0130 15:03:11.339335    3915 log.go:172] (0xc0009962c0) (0xc00061c320) Stream removed, broadcasting: 3\nI0130 15:03:11.339348    3915 log.go:172] (0xc0009962c0) (0xc00040a000) Stream removed, broadcasting: 5\n"
Jan 30 15:03:11.349: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 15:03:11.349: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 15:03:11.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 30 15:03:12.085: INFO: stderr: "I0130 15:03:11.568863    3934 log.go:172] (0xc000116dc0) (0xc0006dc6e0) Create stream\nI0130 15:03:11.569194    3934 log.go:172] (0xc000116dc0) (0xc0006dc6e0) Stream added, broadcasting: 1\nI0130 15:03:11.591747    3934 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0130 15:03:11.591880    3934 log.go:172] (0xc000116dc0) (0xc0006dc000) Create stream\nI0130 15:03:11.591903    3934 log.go:172] (0xc000116dc0) (0xc0006dc000) Stream added, broadcasting: 3\nI0130 15:03:11.593517    3934 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0130 15:03:11.593665    3934 log.go:172] (0xc000116dc0) (0xc0006dc0a0) Create stream\nI0130 15:03:11.593688    3934 log.go:172] (0xc000116dc0) (0xc0006dc0a0) Stream added, broadcasting: 5\nI0130 15:03:11.596552    3934 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0130 15:03:11.735775    3934 log.go:172] (0xc000116dc0) Data frame received for 5\nI0130 15:03:11.735939    3934 log.go:172] (0xc0006dc0a0) (5) Data frame handling\nI0130 15:03:11.735990    3934 log.go:172] (0xc0006dc0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0130 15:03:11.802320    3934 log.go:172] (0xc000116dc0) Data frame received for 3\nI0130 15:03:11.802644    3934 log.go:172] (0xc0006dc000) (3) Data frame handling\nI0130 15:03:11.802762    3934 log.go:172] (0xc0006dc000) (3) Data frame sent\nI0130 15:03:12.058358    3934 log.go:172] (0xc000116dc0) Data frame received for 1\nI0130 15:03:12.059055    3934 log.go:172] (0xc000116dc0) (0xc0006dc000) Stream removed, broadcasting: 3\nI0130 15:03:12.059250    3934 log.go:172] (0xc0006dc6e0) (1) Data frame handling\nI0130 15:03:12.059505    3934 log.go:172] (0xc0006dc6e0) (1) Data frame sent\nI0130 15:03:12.060110    3934 log.go:172] (0xc000116dc0) (0xc0006dc0a0) Stream removed, broadcasting: 5\nI0130 15:03:12.061135    3934 log.go:172] (0xc000116dc0) (0xc0006dc6e0) Stream removed, broadcasting: 1\nI0130 15:03:12.061294    3934 log.go:172] 
(0xc000116dc0) Go away received\nI0130 15:03:12.064539    3934 log.go:172] (0xc000116dc0) (0xc0006dc6e0) Stream removed, broadcasting: 1\nI0130 15:03:12.064579    3934 log.go:172] (0xc000116dc0) (0xc0006dc000) Stream removed, broadcasting: 3\nI0130 15:03:12.064585    3934 log.go:172] (0xc000116dc0) (0xc0006dc0a0) Stream removed, broadcasting: 5\n"
Jan 30 15:03:12.085: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 30 15:03:12.085: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 30 15:03:12.085: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 15:03:12.095: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 30 15:03:22.142: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 15:03:22.142: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 15:03:22.142: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 15:03:22.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999559s
Jan 30 15:03:23.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991998599s
Jan 30 15:03:24.216: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980660005s
Jan 30 15:03:25.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.962409522s
Jan 30 15:03:26.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953840133s
Jan 30 15:03:27.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942425372s
Jan 30 15:03:28.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.929528892s
Jan 30 15:03:29.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.902684091s
Jan 30 15:03:30.305: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.883529138s
Jan 30 15:03:31.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 872.954769ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-2252
Jan 30 15:03:32.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 15:03:33.032: INFO: stderr: "I0130 15:03:32.668869    3956 log.go:172] (0xc00097a420) (0xc000848aa0) Create stream\nI0130 15:03:32.669291    3956 log.go:172] (0xc00097a420) (0xc000848aa0) Stream added, broadcasting: 1\nI0130 15:03:32.690147    3956 log.go:172] (0xc00097a420) Reply frame received for 1\nI0130 15:03:32.690258    3956 log.go:172] (0xc00097a420) (0xc000964000) Create stream\nI0130 15:03:32.690291    3956 log.go:172] (0xc00097a420) (0xc000964000) Stream added, broadcasting: 3\nI0130 15:03:32.695518    3956 log.go:172] (0xc00097a420) Reply frame received for 3\nI0130 15:03:32.695741    3956 log.go:172] (0xc00097a420) (0xc000848000) Create stream\nI0130 15:03:32.695779    3956 log.go:172] (0xc00097a420) (0xc000848000) Stream added, broadcasting: 5\nI0130 15:03:32.698082    3956 log.go:172] (0xc00097a420) Reply frame received for 5\nI0130 15:03:32.804562    3956 log.go:172] (0xc00097a420) Data frame received for 5\nI0130 15:03:32.804734    3956 log.go:172] (0xc000848000) (5) Data frame handling\nI0130 15:03:32.804794    3956 log.go:172] (0xc000848000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 15:03:32.805197    3956 log.go:172] (0xc00097a420) Data frame received for 3\nI0130 15:03:32.805248    3956 log.go:172] (0xc000964000) (3) Data frame handling\nI0130 15:03:32.805298    3956 log.go:172] (0xc000964000) (3) Data frame sent\nI0130 15:03:33.008436    3956 log.go:172] (0xc00097a420) (0xc000964000) Stream removed, broadcasting: 3\nI0130 15:03:33.008808    3956 log.go:172] (0xc00097a420) Data frame received for 1\nI0130 15:03:33.008849    3956 log.go:172] (0xc000848aa0) (1) Data frame handling\nI0130 15:03:33.008910    3956 log.go:172] (0xc000848aa0) (1) Data frame sent\nI0130 15:03:33.008957    3956 log.go:172] (0xc00097a420) (0xc000848aa0) Stream removed, broadcasting: 1\nI0130 15:03:33.009004    3956 log.go:172] (0xc00097a420) (0xc000848000) Stream removed, broadcasting: 5\nI0130 15:03:33.009781    3956 log.go:172] 
(0xc00097a420) Go away received\nI0130 15:03:33.012063    3956 log.go:172] (0xc00097a420) (0xc000848aa0) Stream removed, broadcasting: 1\nI0130 15:03:33.012123    3956 log.go:172] (0xc00097a420) (0xc000964000) Stream removed, broadcasting: 3\nI0130 15:03:33.012135    3956 log.go:172] (0xc00097a420) (0xc000848000) Stream removed, broadcasting: 5\n"
Jan 30 15:03:33.032: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 15:03:33.033: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 15:03:33.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 15:03:33.433: INFO: stderr: "I0130 15:03:33.213162    3977 log.go:172] (0xc000a28370) (0xc000818640) Create stream\nI0130 15:03:33.213606    3977 log.go:172] (0xc000a28370) (0xc000818640) Stream added, broadcasting: 1\nI0130 15:03:33.224461    3977 log.go:172] (0xc000a28370) Reply frame received for 1\nI0130 15:03:33.224613    3977 log.go:172] (0xc000a28370) (0xc000a00000) Create stream\nI0130 15:03:33.224642    3977 log.go:172] (0xc000a28370) (0xc000a00000) Stream added, broadcasting: 3\nI0130 15:03:33.226152    3977 log.go:172] (0xc000a28370) Reply frame received for 3\nI0130 15:03:33.226189    3977 log.go:172] (0xc000a28370) (0xc0008186e0) Create stream\nI0130 15:03:33.226206    3977 log.go:172] (0xc000a28370) (0xc0008186e0) Stream added, broadcasting: 5\nI0130 15:03:33.227800    3977 log.go:172] (0xc000a28370) Reply frame received for 5\nI0130 15:03:33.345213    3977 log.go:172] (0xc000a28370) Data frame received for 5\nI0130 15:03:33.345615    3977 log.go:172] (0xc0008186e0) (5) Data frame handling\nI0130 15:03:33.345731    3977 log.go:172] (0xc000a28370) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 15:03:33.345813    3977 log.go:172] (0xc000a00000) (3) Data frame handling\nI0130 15:03:33.345879    3977 log.go:172] (0xc000a00000) (3) Data frame sent\nI0130 15:03:33.346054    3977 log.go:172] (0xc0008186e0) (5) Data frame sent\nI0130 15:03:33.425149    3977 log.go:172] (0xc000a28370) Data frame received for 1\nI0130 15:03:33.425250    3977 log.go:172] (0xc000a28370) (0xc000a00000) Stream removed, broadcasting: 3\nI0130 15:03:33.425400    3977 log.go:172] (0xc000818640) (1) Data frame handling\nI0130 15:03:33.425423    3977 log.go:172] (0xc000818640) (1) Data frame sent\nI0130 15:03:33.425457    3977 log.go:172] (0xc000a28370) (0xc000818640) Stream removed, broadcasting: 1\nI0130 15:03:33.426102    3977 log.go:172] (0xc000a28370) (0xc0008186e0) Stream removed, broadcasting: 5\nI0130 15:03:33.426133    3977 log.go:172] 
(0xc000a28370) (0xc000818640) Stream removed, broadcasting: 1\nI0130 15:03:33.426140    3977 log.go:172] (0xc000a28370) (0xc000a00000) Stream removed, broadcasting: 3\nI0130 15:03:33.426146    3977 log.go:172] (0xc000a28370) (0xc0008186e0) Stream removed, broadcasting: 5\nI0130 15:03:33.426438    3977 log.go:172] (0xc000a28370) Go away received\n"
Jan 30 15:03:33.433: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 15:03:33.433: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 15:03:33.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2252 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 30 15:03:34.295: INFO: stderr: "I0130 15:03:33.671234    3999 log.go:172] (0xc00060c0b0) (0xc00090c6e0) Create stream\nI0130 15:03:33.671716    3999 log.go:172] (0xc00060c0b0) (0xc00090c6e0) Stream added, broadcasting: 1\nI0130 15:03:33.678672    3999 log.go:172] (0xc00060c0b0) Reply frame received for 1\nI0130 15:03:33.678830    3999 log.go:172] (0xc00060c0b0) (0xc000680280) Create stream\nI0130 15:03:33.678877    3999 log.go:172] (0xc00060c0b0) (0xc000680280) Stream added, broadcasting: 3\nI0130 15:03:33.680403    3999 log.go:172] (0xc00060c0b0) Reply frame received for 3\nI0130 15:03:33.680422    3999 log.go:172] (0xc00060c0b0) (0xc00090c780) Create stream\nI0130 15:03:33.680427    3999 log.go:172] (0xc00060c0b0) (0xc00090c780) Stream added, broadcasting: 5\nI0130 15:03:33.682887    3999 log.go:172] (0xc00060c0b0) Reply frame received for 5\nI0130 15:03:33.917386    3999 log.go:172] (0xc00060c0b0) Data frame received for 3\nI0130 15:03:33.917648    3999 log.go:172] (0xc000680280) (3) Data frame handling\nI0130 15:03:33.917692    3999 log.go:172] (0xc000680280) (3) Data frame sent\nI0130 15:03:33.917819    3999 log.go:172] (0xc00060c0b0) Data frame received for 5\nI0130 15:03:33.917845    3999 log.go:172] (0xc00090c780) (5) Data frame handling\nI0130 15:03:33.917860    3999 log.go:172] (0xc00090c780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0130 15:03:34.269562    3999 log.go:172] (0xc00060c0b0) (0xc000680280) Stream removed, broadcasting: 3\nI0130 15:03:34.269880    3999 log.go:172] (0xc00060c0b0) Data frame received for 1\nI0130 15:03:34.269913    3999 log.go:172] (0xc00090c6e0) (1) Data frame handling\nI0130 15:03:34.269946    3999 log.go:172] (0xc00090c6e0) (1) Data frame sent\nI0130 15:03:34.270151    3999 log.go:172] (0xc00060c0b0) (0xc00090c6e0) Stream removed, broadcasting: 1\nI0130 15:03:34.271157    3999 log.go:172] (0xc00060c0b0) (0xc00090c780) Stream removed, broadcasting: 5\nI0130 15:03:34.271667    3999 log.go:172] 
(0xc00060c0b0) Go away received\nI0130 15:03:34.272521    3999 log.go:172] (0xc00060c0b0) (0xc00090c6e0) Stream removed, broadcasting: 1\nI0130 15:03:34.272557    3999 log.go:172] (0xc00060c0b0) (0xc000680280) Stream removed, broadcasting: 3\nI0130 15:03:34.272571    3999 log.go:172] (0xc00060c0b0) (0xc00090c780) Stream removed, broadcasting: 5\n"
Jan 30 15:03:34.295: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 30 15:03:34.295: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 30 15:03:34.295: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 30 15:03:54.344: INFO: Deleting all statefulset in ns statefulset-2252
Jan 30 15:03:54.348: INFO: Scaling statefulset ss to 0
Jan 30 15:03:54.365: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 15:03:54.369: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:03:54.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2252" for this suite.
Jan 30 15:04:00.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:04:00.577: INFO: namespace statefulset-2252 deletion completed in 6.168601526s

• [SLOW TEST:101.927 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:04:00.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-dde5fbbb-fd94-48ff-b1df-d980c64d4030
STEP: Creating a pod to test consume secrets
Jan 30 15:04:00.752: INFO: Waiting up to 5m0s for pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3" in namespace "secrets-5715" to be "success or failure"
Jan 30 15:04:00.758: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.364939ms
Jan 30 15:04:02.766: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014329833s
Jan 30 15:04:04.775: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023046692s
Jan 30 15:04:06.781: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028775409s
Jan 30 15:04:08.788: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035621624s
Jan 30 15:04:10.795: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04282744s
STEP: Saw pod success
Jan 30 15:04:10.795: INFO: Pod "pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3" satisfied condition "success or failure"
Jan 30 15:04:10.798: INFO: Trying to get logs from node iruya-node pod pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3 container secret-volume-test: 
STEP: delete the pod
Jan 30 15:04:10.933: INFO: Waiting for pod pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3 to disappear
Jan 30 15:04:10.950: INFO: Pod pod-secrets-73f73eb1-18c6-4374-8d7a-39ed904bcef3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:04:10.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5715" for this suite.
Jan 30 15:04:16.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:04:17.106: INFO: namespace secrets-5715 deletion completed in 6.150673351s

• [SLOW TEST:16.528 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:04:17.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 30 15:04:25.737: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3f4f85e8-1b3c-4493-9283-ddf1d2ed5ba1"
Jan 30 15:04:25.738: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3f4f85e8-1b3c-4493-9283-ddf1d2ed5ba1" in namespace "pods-6322" to be "terminated due to deadline exceeded"
Jan 30 15:04:25.795: INFO: Pod "pod-update-activedeadlineseconds-3f4f85e8-1b3c-4493-9283-ddf1d2ed5ba1": Phase="Running", Reason="", readiness=true. Elapsed: 57.620944ms
Jan 30 15:04:27.810: INFO: Pod "pod-update-activedeadlineseconds-3f4f85e8-1b3c-4493-9283-ddf1d2ed5ba1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.072308924s
Jan 30 15:04:27.810: INFO: Pod "pod-update-activedeadlineseconds-3f4f85e8-1b3c-4493-9283-ddf1d2ed5ba1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:04:27.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6322" for this suite.
Jan 30 15:04:33.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:04:34.030: INFO: namespace pods-6322 deletion completed in 6.214008192s

• [SLOW TEST:16.924 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:04:34.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-d59f45bd-c556-4ffa-8557-d3e7c1581024
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:04:46.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9509" for this suite.
Jan 30 15:05:08.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:05:08.482: INFO: namespace configmap-9509 deletion completed in 22.198982669s

• [SLOW TEST:34.451 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:05:08.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 15:05:08.714: INFO: Create a RollingUpdate DaemonSet
Jan 30 15:05:08.783: INFO: Check that daemon pods launch on every node of the cluster
Jan 30 15:05:08.806: INFO: Number of nodes with available pods: 0
Jan 30 15:05:08.806: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:09.834: INFO: Number of nodes with available pods: 0
Jan 30 15:05:09.835: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:11.119: INFO: Number of nodes with available pods: 0
Jan 30 15:05:11.119: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:11.840: INFO: Number of nodes with available pods: 0
Jan 30 15:05:11.840: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:12.833: INFO: Number of nodes with available pods: 0
Jan 30 15:05:12.833: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:13.833: INFO: Number of nodes with available pods: 0
Jan 30 15:05:13.833: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:15.414: INFO: Number of nodes with available pods: 0
Jan 30 15:05:15.415: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:15.827: INFO: Number of nodes with available pods: 0
Jan 30 15:05:15.827: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:16.822: INFO: Number of nodes with available pods: 0
Jan 30 15:05:16.822: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:17.838: INFO: Number of nodes with available pods: 1
Jan 30 15:05:17.839: INFO: Node iruya-node is running more than one daemon pod
Jan 30 15:05:18.826: INFO: Number of nodes with available pods: 2
Jan 30 15:05:18.826: INFO: Number of running nodes: 2, number of available pods: 2
Jan 30 15:05:18.826: INFO: Update the DaemonSet to trigger a rollout
Jan 30 15:05:18.837: INFO: Updating DaemonSet daemon-set
Jan 30 15:05:25.297: INFO: Roll back the DaemonSet before rollout is complete
Jan 30 15:05:25.320: INFO: Updating DaemonSet daemon-set
Jan 30 15:05:25.320: INFO: Make sure DaemonSet rollback is complete
Jan 30 15:05:25.517: INFO: Wrong image for pod: daemon-set-slsz8. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 30 15:05:25.518: INFO: Pod daemon-set-slsz8 is not available
Jan 30 15:05:26.563: INFO: Wrong image for pod: daemon-set-slsz8. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 30 15:05:26.564: INFO: Pod daemon-set-slsz8 is not available
Jan 30 15:05:27.560: INFO: Wrong image for pod: daemon-set-slsz8. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 30 15:05:27.560: INFO: Pod daemon-set-slsz8 is not available
Jan 30 15:05:29.471: INFO: Wrong image for pod: daemon-set-slsz8. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 30 15:05:29.471: INFO: Pod daemon-set-slsz8 is not available
Jan 30 15:05:30.958: INFO: Pod daemon-set-h4w8q is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5780, will wait for the garbage collector to delete the pods
Jan 30 15:05:31.039: INFO: Deleting DaemonSet.extensions daemon-set took: 8.816149ms
Jan 30 15:05:31.439: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.783052ms
Jan 30 15:05:46.644: INFO: Number of nodes with available pods: 0
Jan 30 15:05:46.644: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 15:05:46.666: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5780/daemonsets","resourceVersion":"22455131"},"items":null}

Jan 30 15:05:46.669: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5780/pods","resourceVersion":"22455131"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:05:46.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5780" for this suite.
Jan 30 15:05:52.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:05:52.819: INFO: namespace daemonsets-5780 deletion completed in 6.135412876s

• [SLOW TEST:44.336 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
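The rollback check above repeatedly compares each daemon pod's image against the expected one until no mismatches remain ("Wrong image for pod … Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent"). A minimal sketch of that comparison, assuming a simple name-to-image mapping (the helper is illustrative, not the e2e framework's actual code):

```python
# Sketch of the image-mismatch check behind the DaemonSet rollback test.
# Pod names and images mirror the log above; the helper itself is illustrative.
EXPECTED_IMAGE = "docker.io/library/nginx:1.14-alpine"

def pods_with_wrong_image(pods, expected=EXPECTED_IMAGE):
    """Return the names of pods whose container image differs from `expected`."""
    return [name for name, image in pods.items() if image != expected]

# Mid-rollback state: one pod still runs the bad image from the aborted rollout.
pods = {
    "daemon-set-slsz8": "foo:non-existent",
    "daemon-set-h4w8q": "docker.io/library/nginx:1.14-alpine",
}
print(pods_with_wrong_image(pods))  # ['daemon-set-slsz8']
```

The test passes once this list is empty for every node, which is why the log repeats the same "Wrong image" line until pod daemon-set-slsz8 is replaced.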
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:05:52.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 30 15:05:52.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7" in namespace "projected-4926" to be "success or failure"
Jan 30 15:05:52.934: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.306749ms
Jan 30 15:05:54.947: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022200241s
Jan 30 15:05:57.022: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097535935s
Jan 30 15:05:59.029: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103675207s
Jan 30 15:06:01.034: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10947492s
Jan 30 15:06:03.123: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197843541s
STEP: Saw pod success
Jan 30 15:06:03.123: INFO: Pod "downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7" satisfied condition "success or failure"
Jan 30 15:06:03.148: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7 container client-container: 
STEP: delete the pod
Jan 30 15:06:03.636: INFO: Waiting for pod downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7 to disappear
Jan 30 15:06:03.726: INFO: Pod downwardapi-volume-008f8d9c-824b-4d79-83a8-19b0c96b5db7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:06:03.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4926" for this suite.
Jan 30 15:06:09.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:06:09.903: INFO: namespace projected-4926 deletion completed in 6.166696331s

• [SLOW TEST:17.084 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
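The projected downward API test above mounts a volume file containing the container's own memory request. A hedged sketch of such a pod manifest as a plain dict (field names follow the core/v1 API; the image, paths, and request value are illustrative, not taken from the test):

```python
# Illustrative pod manifest for a downward API volume exposing the container's
# memory request, as exercised by the test above. Values are examples only.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/memory_request"],
            "resources": {"requests": {"memory": "32Mi"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "memory_request",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.memory",
                        # with divisor "1" the file holds the request in bytes
                        "divisor": "1",
                    },
                }],
            },
        }],
    },
}
```

The test then reads the container's logs (the `cat` output) and checks they match the declared request, which is why the log grabs logs from `client-container` after the pod succeeds.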
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:06:09.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 15:06:09.954: INFO: Creating deployment "test-recreate-deployment"
Jan 30 15:06:09.960: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 30 15:06:10.107: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 30 15:06:12.122: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 30 15:06:12.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993569, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 15:06:14.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993569, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 15:06:16.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993570, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715993569, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 15:06:18.137: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 30 15:06:18.147: INFO: Updating deployment test-recreate-deployment
Jan 30 15:06:18.147: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 30 15:06:18.617: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2116,SelfLink:/apis/apps/v1/namespaces/deployment-2116/deployments/test-recreate-deployment,UID:44c63341-e746-47c6-a1e6-249acfccf9a7,ResourceVersion:22455263,Generation:2,CreationTimestamp:2020-01-30 15:06:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-30 15:06:18 +0000 UTC 2020-01-30 15:06:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-30 15:06:18 +0000 UTC 2020-01-30 15:06:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 30 15:06:18.640: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2116,SelfLink:/apis/apps/v1/namespaces/deployment-2116/replicasets/test-recreate-deployment-5c8c9cc69d,UID:775a524b-b60b-40e5-841d-b47af86eac04,ResourceVersion:22455261,Generation:1,CreationTimestamp:2020-01-30 15:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 44c63341-e746-47c6-a1e6-249acfccf9a7 0xc0025a3357 0xc0025a3358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 30 15:06:18.640: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 30 15:06:18.640: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2116,SelfLink:/apis/apps/v1/namespaces/deployment-2116/replicasets/test-recreate-deployment-6df85df6b9,UID:d7455b8a-b85a-41a7-80ad-3a94389e30d0,ResourceVersion:22455252,Generation:2,CreationTimestamp:2020-01-30 15:06:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 44c63341-e746-47c6-a1e6-249acfccf9a7 0xc0025a3427 0xc0025a3428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 30 15:06:18.650: INFO: Pod "test-recreate-deployment-5c8c9cc69d-5rg94" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-5rg94,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2116,SelfLink:/api/v1/namespaces/deployment-2116/pods/test-recreate-deployment-5c8c9cc69d-5rg94,UID:7ce2f08b-73ec-4af2-88c0-d54fb984c29b,ResourceVersion:22455259,Generation:0,CreationTimestamp:2020-01-30 15:06:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 775a524b-b60b-40e5-841d-b47af86eac04 0xc002f8d5b7 0xc002f8d5b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c4ft5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c4ft5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c4ft5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002f8d630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002f8d650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 15:06:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:06:18.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2116" for this suite.
Jan 30 15:06:24.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:06:24.839: INFO: namespace deployment-2116 deletion completed in 6.183443802s

• [SLOW TEST:14.935 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
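The Recreate test above shows the defining behavior of that strategy: the old ReplicaSet (test-recreate-deployment-6df85df6b9, running redis) is scaled to 0 before any pod of the new ReplicaSet (test-recreate-deployment-5c8c9cc69d, running nginx) starts. A minimal sketch of the Deployment spec involved, with names and images mirrored from the log (the manifest is illustrative, not dumped from the run):

```python
# Illustrative Deployment spec with the Recreate strategy, matching the
# behavior shown above: old pods are deleted before new ones are created.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "test-recreate-deployment"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"name": "sample-pod-3"}},
        # Recreate (vs. the default RollingUpdate) guarantees old and new
        # pods never run at the same time.
        "strategy": {"type": "Recreate"},
        "template": {
            "metadata": {"labels": {"name": "sample-pod-3"}},
            "spec": {"containers": [{
                "name": "nginx",
                "image": "docker.io/library/nginx:1.14-alpine",
            }]},
        },
    },
}
```

This is why the final dump shows the old ReplicaSet at Replicas:0 while the new pod is still Pending: availability briefly drops to zero by design.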
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:06:24.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 30 15:06:24.972: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:06:41.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6274" for this suite.
Jan 30 15:06:47.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:06:47.324: INFO: namespace pods-6274 deletion completed in 6.199976329s

• [SLOW TEST:22.485 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
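The pod lifecycle test above sets up a watch before submitting the pod, then verifies that both creation and deletion are observed on the stream. A sketch of the ordering check that implies, assuming a simple list of watch event types (the event list is illustrative, not captured from this run):

```python
# Sketch of "verifying pod creation/deletion was observed": a watch stream
# scoped to one pod should begin with an ADDED event and end with DELETED.
def creation_and_deletion_observed(events):
    """True if the first event is ADDED and the last is DELETED."""
    return bool(events) and events[0] == "ADDED" and events[-1] == "DELETED"

# Typical sequence: creation, status updates during graceful termination,
# then removal once the kubelet confirms the containers are gone.
events = ["ADDED", "MODIFIED", "MODIFIED", "DELETED"]
print(creation_and_deletion_observed(events))  # True
```

The graceful-deletion steps in the log (kubelet observing the termination notice before the DELETED event) are what the MODIFIED events stand in for here.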
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:06:47.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 30 15:06:47.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 30 15:06:47.648: INFO: stderr: ""
Jan 30 15:06:47.648: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:06:47.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4581" for this suite.
Jan 30 15:06:53.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:06:53.841: INFO: namespace kubectl-4581 deletion completed in 6.182541184s

• [SLOW TEST:6.517 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
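The kubectl version test asserts that both client and server version data appear in the command's output. A small sketch of that extraction over the stdout captured above (abbreviated here with `...`; the parsing helper is illustrative):

```python
import re

# Sketch of the "all data is printed" check: extract the client and server
# GitVersion fields from `kubectl version` output (stdout abridged from the log).
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", ...}\n'
    'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", ...}\n'
)

def git_versions(text):
    """Return all GitVersion values in the order they appear."""
    return re.findall(r'GitVersion:"([^"]+)"', text)

print(git_versions(stdout))  # ['v1.15.7', 'v1.15.1']
```

Note the skew visible in the run: the kubectl client is v1.15.7 while the kube-apiserver is v1.15.1, which the suite header also reported.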
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:06:53.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 30 15:07:00.060: INFO: 0 pods remaining
Jan 30 15:07:00.060: INFO: 0 pods have nil DeletionTimestamp
Jan 30 15:07:00.060: INFO: 
STEP: Gathering metrics
W0130 15:07:01.001341       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 15:07:01.001: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:07:01.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1643" for this suite.
Jan 30 15:07:11.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:07:11.123: INFO: namespace gc-1643 deletion completed in 10.1144646s

• [SLOW TEST:17.281 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
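The garbage collector test above deletes the ReplicationController with delete options that keep the owner around until its dependents are gone. That corresponds to foreground cascading deletion; a hedged sketch of the DeleteOptions involved (the dict mirrors the meta/v1 API shape, but is an illustration, not the test's literal payload):

```python
# Illustrative DeleteOptions for the garbage-collector behavior above:
# "Foreground" propagation keeps the owner (the RC) in a deleting state,
# held by a foregroundDeletion finalizer, until the GC has removed its pods.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",  # vs. "Background" or "Orphan"
}
```

With "Background" the owner would disappear immediately and the pods would be collected afterwards; the "0 pods remaining" line marks the point where foreground deletion can finally release the RC.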
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:07:11.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf
Jan 30 15:07:11.272: INFO: Pod name my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf: Found 0 pods out of 1
Jan 30 15:07:16.289: INFO: Pod name my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf: Found 1 pods out of 1
Jan 30 15:07:16.289: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf" are running
Jan 30 15:07:22.325: INFO: Pod "my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf-kx8lp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 15:07:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 15:07:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 15:07:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 15:07:11 +0000 UTC Reason: Message:}])
Jan 30 15:07:22.326: INFO: Trying to dial the pod
Jan 30 15:07:27.380: INFO: Controller my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf: Got expected result from replica 1 [my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf-kx8lp]: "my-hostname-basic-62778e4c-ece8-4bfe-81ca-017f86803ddf-kx8lp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:07:27.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-821" for this suite.
Jan 30 15:07:33.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:07:33.554: INFO: namespace replication-controller-821 deletion completed in 6.164181038s

• [SLOW TEST:22.431 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:07:33.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 30 15:07:34.336: INFO: Pod name wrapped-volume-race-b3d6b16f-bd7d-4b7a-8dc2-fb8a1e110750: Found 0 pods out of 5
Jan 30 15:07:39.401: INFO: Pod name wrapped-volume-race-b3d6b16f-bd7d-4b7a-8dc2-fb8a1e110750: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b3d6b16f-bd7d-4b7a-8dc2-fb8a1e110750 in namespace emptydir-wrapper-3829, will wait for the garbage collector to delete the pods
Jan 30 15:08:07.704: INFO: Deleting ReplicationController wrapped-volume-race-b3d6b16f-bd7d-4b7a-8dc2-fb8a1e110750 took: 10.579916ms
Jan 30 15:08:08.204: INFO: Terminating ReplicationController wrapped-volume-race-b3d6b16f-bd7d-4b7a-8dc2-fb8a1e110750 pods took: 500.508981ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 15:08:57.663: INFO: Pod name wrapped-volume-race-613502c2-1022-4f2e-b609-0d4b6a2766ac: Found 0 pods out of 5
Jan 30 15:09:02.694: INFO: Pod name wrapped-volume-race-613502c2-1022-4f2e-b609-0d4b6a2766ac: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-613502c2-1022-4f2e-b609-0d4b6a2766ac in namespace emptydir-wrapper-3829, will wait for the garbage collector to delete the pods
Jan 30 15:09:38.834: INFO: Deleting ReplicationController wrapped-volume-race-613502c2-1022-4f2e-b609-0d4b6a2766ac took: 9.769923ms
Jan 30 15:09:39.235: INFO: Terminating ReplicationController wrapped-volume-race-613502c2-1022-4f2e-b609-0d4b6a2766ac pods took: 401.36252ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 15:10:27.499: INFO: Pod name wrapped-volume-race-fb31c94b-39c2-4f1e-af61-65f2ccab5853: Found 0 pods out of 5
Jan 30 15:10:32.537: INFO: Pod name wrapped-volume-race-fb31c94b-39c2-4f1e-af61-65f2ccab5853: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fb31c94b-39c2-4f1e-af61-65f2ccab5853 in namespace emptydir-wrapper-3829, will wait for the garbage collector to delete the pods
Jan 30 15:11:06.725: INFO: Deleting ReplicationController wrapped-volume-race-fb31c94b-39c2-4f1e-af61-65f2ccab5853 took: 22.591321ms
Jan 30 15:11:07.026: INFO: Terminating ReplicationController wrapped-volume-race-fb31c94b-39c2-4f1e-af61-65f2ccab5853 pods took: 301.389201ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:11:57.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3829" for this suite.
Jan 30 15:12:07.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:12:07.794: INFO: namespace emptydir-wrapper-3829 deletion completed in 10.158457998s

• [SLOW TEST:274.240 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 30 15:12:07.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 30 15:12:19.870: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 30 15:12:19.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9600" for this suite.
Jan 30 15:12:26.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 30 15:12:26.109: INFO: namespace container-runtime-9600 deletion completed in 6.122196003s

• [SLOW TEST:18.315 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
Jan 30 15:12:26.110: INFO: Running AfterSuite actions on all nodes
Jan 30 15:12:26.110: INFO: Running AfterSuite actions on node 1
Jan 30 15:12:26.110: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8162.248 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS