I1218 12:56:09.789881 8 e2e.go:243] Starting e2e run "cf51e942-6928-4af1-b147-86224600ce26" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576673768 - Will randomize all specs
Will run 215 of 4412 specs

Dec 18 12:56:10.137: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 12:56:10.141: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 18 12:56:10.211: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 18 12:56:10.269: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 18 12:56:10.269: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 18 12:56:10.269: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 18 12:56:10.279: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 18 12:56:10.279: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 18 12:56:10.279: INFO: e2e test version: v1.15.7
Dec 18 12:56:10.281: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:56:10.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Dec 18 12:56:10.459: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 12:56:10.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f" in namespace "projected-5762" to be "success or failure"
Dec 18 12:56:10.647: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.645075ms
Dec 18 12:56:12.662: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030634691s
Dec 18 12:56:14.677: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045879508s
Dec 18 12:56:16.691: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060013947s
Dec 18 12:56:20.270: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.638681462s
Dec 18 12:56:22.276: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.64505106s
STEP: Saw pod success
Dec 18 12:56:22.276: INFO: Pod "downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f" satisfied condition "success or failure"
Dec 18 12:56:22.279: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f container client-container: 
STEP: delete the pod
Dec 18 12:56:22.370: INFO: Waiting for pod downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f to disappear
Dec 18 12:56:22.475: INFO: Pod downwardapi-volume-ca7f32f5-8473-4254-8424-a4eac7f4e59f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:56:22.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5762" for this suite.
Dec 18 12:56:28.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:56:28.768: INFO: namespace projected-5762 deletion completed in 6.275996526s

• [SLOW TEST:18.486 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:56:28.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 18 12:56:38.023: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:56:38.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2020" for this suite.
Dec 18 12:56:44.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:56:44.315: INFO: namespace container-runtime-2020 deletion completed in 6.148649591s

• [SLOW TEST:15.547 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:56:44.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 18 12:56:44.385: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 18 12:56:44.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817'
Dec 18 12:56:46.669: INFO: stderr: ""
Dec 18 12:56:46.669: INFO: stdout: "service/redis-slave created\n"
Dec 18 12:56:46.670: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 18 12:56:46.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817'
Dec 18 12:56:47.205: INFO: stderr: ""
Dec 18 12:56:47.205: INFO: stdout: "service/redis-master created\n"
Dec 18 12:56:47.206: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 18 12:56:47.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817'
Dec 18 12:56:47.683: INFO: stderr: ""
Dec 18 12:56:47.683: INFO: stdout: "service/frontend created\n"
Dec 18 12:56:47.685: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 18 12:56:47.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817'
Dec 18 12:56:48.103: INFO: stderr: ""
Dec 18 12:56:48.104: INFO: stdout: "deployment.apps/frontend created\n"
Dec 18 12:56:48.105: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 18 12:56:48.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817'
Dec 18 12:56:48.740: INFO: stderr: ""
Dec 18 12:56:48.740: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 18 12:56:48.741: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 18 12:56:48.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-817'
Dec 18 12:56:51.267: INFO: stderr: ""
Dec 18 12:56:51.267: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 18 12:56:51.267: INFO: Waiting for all frontend pods to be Running.
Dec 18 12:57:16.320: INFO: Waiting for frontend to serve content.
Dec 18 12:57:16.388: INFO: Trying to add a new entry to the guestbook.
Dec 18 12:57:16.416: INFO: Verifying that added entry can be retrieved.
Dec 18 12:57:16.459: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Dec 18 12:57:21.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-817'
Dec 18 12:57:21.884: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:57:21.884: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:57:21.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-817'
Dec 18 12:57:22.188: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:57:22.188: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:57:22.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-817'
Dec 18 12:57:22.402: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:57:22.402: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:57:22.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-817'
Dec 18 12:57:22.588: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:57:22.588: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:57:22.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-817'
Dec 18 12:57:22.776: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:57:22.777: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 18 12:57:22.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-817'
Dec 18 12:57:23.102: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 12:57:23.102: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:57:23.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-817" for this suite.
Dec 18 12:58:03.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:58:03.361: INFO: namespace kubectl-817 deletion completed in 40.17089209s

• [SLOW TEST:79.046 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:58:03.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2197
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2197 to expose endpoints map[]
Dec 18 12:58:03.890: INFO: successfully validated that service multi-endpoint-test in namespace services-2197 exposes endpoints map[] (162.857045ms elapsed)
STEP: Creating pod pod1 in namespace services-2197
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2197 to expose endpoints map[pod1:[100]]
Dec 18 12:58:08.063: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.142607946s elapsed, will retry)
Dec 18 12:58:15.218: INFO: successfully validated that service multi-endpoint-test in namespace services-2197 exposes endpoints map[pod1:[100]] (11.297434971s elapsed)
STEP: Creating pod pod2 in namespace services-2197
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2197 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 18 12:58:22.768: INFO: Unexpected endpoints: found map[1a026f24-6c4e-49b3-aa8c-40f20c86ddf2:[100]], expected map[pod1:[100] pod2:[101]] (7.521414679s elapsed, will retry)
Dec 18 12:58:25.844: INFO: successfully validated that service multi-endpoint-test in namespace services-2197 exposes endpoints map[pod1:[100] pod2:[101]] (10.597400046s elapsed)
STEP: Deleting pod pod1 in namespace services-2197
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2197 to expose endpoints map[pod2:[101]]
Dec 18 12:58:27.045: INFO: successfully validated that service multi-endpoint-test in namespace services-2197 exposes endpoints map[pod2:[101]] (1.184177895s elapsed)
STEP: Deleting pod pod2 in namespace services-2197
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2197 to expose endpoints map[]
Dec 18 12:58:28.082: INFO: successfully validated that service multi-endpoint-test in namespace services-2197 exposes endpoints map[] (1.030497922s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:58:28.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2197" for this suite.
Dec 18 12:58:51.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:58:51.207: INFO: namespace services-2197 deletion completed in 22.242757468s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:47.846 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:58:51.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jm2n
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 12:58:51.481: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jm2n" in namespace "subpath-457" to be "success or failure"
Dec 18 12:58:51.486: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.945324ms
Dec 18 12:58:53.521: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040116295s
Dec 18 12:58:55.537: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055944247s
Dec 18 12:58:57.621: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139879677s
Dec 18 12:58:59.632: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 8.151163223s
Dec 18 12:59:01.914: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 10.43319756s
Dec 18 12:59:03.962: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 12.481753196s
Dec 18 12:59:05.972: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 14.491101093s
Dec 18 12:59:07.984: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 16.50327534s
Dec 18 12:59:09.994: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 18.513110336s
Dec 18 12:59:12.002: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 20.521052186s
Dec 18 12:59:14.010: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 22.529658748s
Dec 18 12:59:16.018: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 24.537050985s
Dec 18 12:59:18.035: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 26.553837177s
Dec 18 12:59:20.040: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Running", Reason="", readiness=true. Elapsed: 28.559690693s
Dec 18 12:59:22.155: INFO: Pod "pod-subpath-test-configmap-jm2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.673797888s
STEP: Saw pod success
Dec 18 12:59:22.155: INFO: Pod "pod-subpath-test-configmap-jm2n" satisfied condition "success or failure"
Dec 18 12:59:22.161: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-jm2n container test-container-subpath-configmap-jm2n: 
STEP: delete the pod
Dec 18 12:59:22.429: INFO: Waiting for pod pod-subpath-test-configmap-jm2n to disappear
Dec 18 12:59:22.444: INFO: Pod pod-subpath-test-configmap-jm2n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jm2n
Dec 18 12:59:22.444: INFO: Deleting pod "pod-subpath-test-configmap-jm2n" in namespace "subpath-457"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:59:22.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-457" for this suite.
Dec 18 12:59:28.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:59:28.655: INFO: namespace subpath-457 deletion completed in 6.19748214s

• [SLOW TEST:37.447 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:59:28.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 18 12:59:28.728: INFO: Waiting up to 5m0s for pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb" in namespace "downward-api-9318" to be "success or failure"
Dec 18 12:59:28.853: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 125.392482ms
Dec 18 12:59:30.870: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142077654s
Dec 18 12:59:32.895: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16763012s
Dec 18 12:59:34.907: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178979215s
Dec 18 12:59:36.922: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194336123s
Dec 18 12:59:38.931: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.203398429s
STEP: Saw pod success
Dec 18 12:59:38.931: INFO: Pod "downward-api-391367fa-1798-4b73-917d-7baaa7647fdb" satisfied condition "success or failure"
Dec 18 12:59:38.935: INFO: Trying to get logs from node iruya-node pod downward-api-391367fa-1798-4b73-917d-7baaa7647fdb container dapi-container: 
STEP: delete the pod
Dec 18 12:59:39.164: INFO: Waiting for pod downward-api-391367fa-1798-4b73-917d-7baaa7647fdb to disappear
Dec 18 12:59:39.171: INFO: Pod downward-api-391367fa-1798-4b73-917d-7baaa7647fdb no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:59:39.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9318" for this suite.
Dec 18 12:59:45.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 12:59:45.349: INFO: namespace downward-api-9318 deletion completed in 6.168540575s

• [SLOW TEST:16.693 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 12:59:45.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 18 12:59:45.462: INFO: Waiting up to 5m0s for pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f" in namespace "downward-api-3685" to be "success or failure"
Dec 18 12:59:45.469: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088174ms
Dec 18 12:59:47.478: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015579406s
Dec 18 12:59:49.489: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025940981s
Dec 18 12:59:51.516: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053280316s
Dec 18 12:59:53.544: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081191969s
Dec 18 12:59:55.558: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095703951s
STEP: Saw pod success
Dec 18 12:59:55.559: INFO: Pod "downward-api-fbbf3501-5c43-4275-902f-139b753be30f" satisfied condition "success or failure"
Dec 18 12:59:55.564: INFO: Trying to get logs from node iruya-node pod downward-api-fbbf3501-5c43-4275-902f-139b753be30f container dapi-container: 
STEP: delete the pod
Dec 18 12:59:55.641: INFO: Waiting for pod downward-api-fbbf3501-5c43-4275-902f-139b753be30f to disappear
Dec 18 12:59:55.687: INFO: Pod downward-api-fbbf3501-5c43-4275-902f-139b753be30f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 12:59:55.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3685" for this suite.
Dec 18 13:00:01.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:00:01.884: INFO: namespace downward-api-3685 deletion completed in 6.188767404s

• [SLOW TEST:16.535 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:00:01.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-adaf2176-3f0a-485f-9400-786e9ca2a89a
STEP: Creating a pod to test consume secrets
Dec 18 13:00:02.020: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122" in namespace "projected-7296" to be "success or failure"
Dec 18 13:00:02.118: INFO: Pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122": Phase="Pending", Reason="", readiness=false. Elapsed: 97.22678ms
Dec 18 13:00:04.124: INFO: Pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103411996s
Dec 18 13:00:06.132: INFO: Pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111562597s
Dec 18 13:00:08.171: INFO: Pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15076405s
Dec 18 13:00:10.183: INFO: Pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.16251238s
STEP: Saw pod success
Dec 18 13:00:10.183: INFO: Pod "pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122" satisfied condition "success or failure"
Dec 18 13:00:10.190: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122 container secret-volume-test: 
STEP: delete the pod
Dec 18 13:00:10.485: INFO: Waiting for pod pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122 to disappear
Dec 18 13:00:10.511: INFO: Pod pod-projected-secrets-7ebb6c5c-f30f-4427-bbdb-80104b5ef122 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:00:10.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7296" for this suite.
Dec 18 13:00:16.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:00:16.719: INFO: namespace projected-7296 deletion completed in 6.198888214s

• [SLOW TEST:14.834 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:00:16.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 18 13:00:16.867: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:00:16.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4466" for this suite.
Dec 18 13:00:23.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:00:23.125: INFO: namespace kubectl-4466 deletion completed in 6.134649065s

• [SLOW TEST:6.405 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:00:23.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:00:23.271: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.48157ms)
Dec 18 13:00:23.281: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.701683ms)
Dec 18 13:00:23.286: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.272153ms)
Dec 18 13:00:23.291: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.165918ms)
Dec 18 13:00:23.298: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.665932ms)
Dec 18 13:00:23.307: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.931268ms)
Dec 18 13:00:23.315: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.637663ms)
Dec 18 13:00:23.319: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.693316ms)
Dec 18 13:00:23.324: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.332276ms)
Dec 18 13:00:23.330: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.495142ms)
Dec 18 13:00:23.359: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 28.651567ms)
Dec 18 13:00:23.369: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.406378ms)
Dec 18 13:00:23.381: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.765305ms)
Dec 18 13:00:23.385: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.088133ms)
Dec 18 13:00:23.390: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.485607ms)
Dec 18 13:00:23.394: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.895405ms)
Dec 18 13:00:23.398: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.506166ms)
Dec 18 13:00:23.402: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.546844ms)
Dec 18 13:00:23.408: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.870483ms)
Dec 18 13:00:23.412: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.074457ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:00:23.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4519" for this suite.
Dec 18 13:00:29.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:00:29.615: INFO: namespace proxy-4519 deletion completed in 6.198616815s

• [SLOW TEST:6.489 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:00:29.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 18 13:00:29.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 18 13:00:29.935: INFO: stderr: ""
Dec 18 13:00:29.935: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:00:29.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-494" for this suite.
Dec 18 13:00:36.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:00:36.107: INFO: namespace kubectl-494 deletion completed in 6.160354534s

• [SLOW TEST:6.492 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:00:36.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-049affe4-30c9-4de4-825f-417574b427a4
STEP: Creating a pod to test consume secrets
Dec 18 13:00:36.247: INFO: Waiting up to 5m0s for pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f" in namespace "secrets-7995" to be "success or failure"
Dec 18 13:00:36.254: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.255552ms
Dec 18 13:00:38.264: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016946806s
Dec 18 13:00:40.283: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035870708s
Dec 18 13:00:42.293: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046329553s
Dec 18 13:00:44.304: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056806142s
Dec 18 13:00:46.317: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070308808s
STEP: Saw pod success
Dec 18 13:00:46.317: INFO: Pod "pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f" satisfied condition "success or failure"
Dec 18 13:00:46.322: INFO: Trying to get logs from node iruya-node pod pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f container secret-env-test: 
STEP: delete the pod
Dec 18 13:00:46.407: INFO: Waiting for pod pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f to disappear
Dec 18 13:00:46.413: INFO: Pod pod-secrets-ec927715-276a-436f-99f6-b6e0b833af2f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:00:46.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7995" for this suite.
Dec 18 13:00:52.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:00:52.559: INFO: namespace secrets-7995 deletion completed in 6.136228001s

• [SLOW TEST:16.450 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:00:52.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:01:01.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2938" for this suite.
Dec 18 13:01:07.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:01:07.409: INFO: namespace emptydir-wrapper-2938 deletion completed in 6.306090397s

• [SLOW TEST:14.848 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:01:07.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:01:07.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4" in namespace "downward-api-5807" to be "success or failure"
Dec 18 13:01:07.479: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017267ms
Dec 18 13:01:09.490: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014951975s
Dec 18 13:01:11.514: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038959031s
Dec 18 13:01:13.527: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052437318s
Dec 18 13:01:15.540: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4": Phase="Running", Reason="", readiness=true. Elapsed: 8.065450022s
Dec 18 13:01:17.549: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074454199s
STEP: Saw pod success
Dec 18 13:01:17.549: INFO: Pod "downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4" satisfied condition "success or failure"
Dec 18 13:01:17.555: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4 container client-container: 
STEP: delete the pod
Dec 18 13:01:17.689: INFO: Waiting for pod downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4 to disappear
Dec 18 13:01:17.696: INFO: Pod downwardapi-volume-99988ec7-a50c-4a43-9c52-a891f7739ca4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:01:17.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5807" for this suite.
Dec 18 13:01:23.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:01:23.941: INFO: namespace downward-api-5807 deletion completed in 6.237694448s

• [SLOW TEST:16.532 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:01:23.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3d596b98-eaa6-4565-a536-7cf4c5ba699c
STEP: Creating a pod to test consume configMaps
Dec 18 13:01:24.059: INFO: Waiting up to 5m0s for pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028" in namespace "configmap-2715" to be "success or failure"
Dec 18 13:01:24.068: INFO: Pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389434ms
Dec 18 13:01:26.074: INFO: Pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014768173s
Dec 18 13:01:28.788: INFO: Pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728226859s
Dec 18 13:01:30.797: INFO: Pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73684144s
Dec 18 13:01:32.805: INFO: Pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.745686266s
STEP: Saw pod success
Dec 18 13:01:32.806: INFO: Pod "pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028" satisfied condition "success or failure"
Dec 18 13:01:32.810: INFO: Trying to get logs from node iruya-node pod pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028 container configmap-volume-test: 
STEP: delete the pod
Dec 18 13:01:32.918: INFO: Waiting for pod pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028 to disappear
Dec 18 13:01:33.100: INFO: Pod pod-configmaps-24bddd31-2044-427d-943c-218ccb4e4028 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:01:33.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2715" for this suite.
Dec 18 13:01:39.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:01:39.253: INFO: namespace configmap-2715 deletion completed in 6.143381317s

• [SLOW TEST:15.312 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:01:39.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 18 13:04:41.672: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 13:04:41.711: INFO: Pod pod-with-poststart-exec-hook still exists
[... 53 more identical 2-second poll cycles elided: "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists", Dec 18 13:04:43 through Dec 18 13:06:27 ...]
Dec 18 13:06:29.712: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 18 13:06:29.725: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:06:29.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-280" for this suite.
Dec 18 13:06:51.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:06:51.997: INFO: namespace container-lifecycle-hook-280 deletion completed in 22.259898577s

• [SLOW TEST:312.743 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
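The postStart hook exercised above can be sketched as follows. The real test verifies the hook by having it contact a separate handler pod (the HTTPGet handler created in its BeforeEach); this standalone sketch just writes a marker file, and all names are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-example        # hypothetical name
spec:
  containers:
  - name: main
    image: nginx:1.15-alpine
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it is created; the pod is
          # not considered started until the hook completes.
          command: ["sh", "-c", "echo started > /tmp/poststart"]
```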
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:06:51.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:06:52.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98" in namespace "downward-api-8802" to be "success or failure"
Dec 18 13:06:52.161: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98": Phase="Pending", Reason="", readiness=false. Elapsed: 11.263088ms
Dec 18 13:06:54.169: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019187645s
Dec 18 13:06:56.185: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034871427s
Dec 18 13:06:58.195: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045340697s
Dec 18 13:07:00.208: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058158209s
Dec 18 13:07:02.218: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067804874s
STEP: Saw pod success
Dec 18 13:07:02.218: INFO: Pod "downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98" satisfied condition "success or failure"
Dec 18 13:07:02.225: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98 container client-container: 
STEP: delete the pod
Dec 18 13:07:02.483: INFO: Waiting for pod downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98 to disappear
Dec 18 13:07:02.493: INFO: Pod downwardapi-volume-683e7bdf-2100-40a8-96c3-7eedad3d4b98 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:07:02.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8802" for this suite.
Dec 18 13:07:08.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:07:08.714: INFO: namespace downward-api-8802 deletion completed in 6.211855206s

• [SLOW TEST:16.716 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
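The downward API volume used above exposes the container's own memory request as a file via resourceFieldRef. A minimal sketch, with hypothetical names and request size:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"             # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory  # file contains the request in bytes
```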
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:07:08.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 18 13:07:08.845: INFO: Waiting up to 5m0s for pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50" in namespace "emptydir-6346" to be "success or failure"
Dec 18 13:07:08.927: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Pending", Reason="", readiness=false. Elapsed: 82.202061ms
Dec 18 13:07:10.939: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093960605s
Dec 18 13:07:12.953: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108088317s
Dec 18 13:07:14.960: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115051656s
Dec 18 13:07:16.969: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123615426s
Dec 18 13:07:19.639: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Pending", Reason="", readiness=false. Elapsed: 10.793908823s
Dec 18 13:07:21.653: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.808091746s
STEP: Saw pod success
Dec 18 13:07:21.654: INFO: Pod "pod-3ee52281-854e-4590-8354-9f32b95fcf50" satisfied condition "success or failure"
Dec 18 13:07:21.663: INFO: Trying to get logs from node iruya-node pod pod-3ee52281-854e-4590-8354-9f32b95fcf50 container test-container: 
STEP: delete the pod
Dec 18 13:07:21.846: INFO: Waiting for pod pod-3ee52281-854e-4590-8354-9f32b95fcf50 to disappear
Dec 18 13:07:21.858: INFO: Pod pod-3ee52281-854e-4590-8354-9f32b95fcf50 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:07:21.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6346" for this suite.
Dec 18 13:07:28.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:07:28.472: INFO: namespace emptydir-6346 deletion completed in 6.602500909s

• [SLOW TEST:19.759 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
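The (non-root,0644,tmpfs) variant above writes a 0644 file into a memory-backed emptyDir while running as a non-root user. Roughly as below; the UID, names, and the write command are assumptions (the test image performs the 0644 write itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "umask 022 && echo hello > /mnt/test/file && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```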
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:07:28.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-10d729ce-f9ee-41ef-a809-d89d5518eb5f
STEP: Creating a pod to test consume secrets
Dec 18 13:07:28.674: INFO: Waiting up to 5m0s for pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f" in namespace "secrets-2626" to be "success or failure"
Dec 18 13:07:28.713: INFO: Pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.105943ms
Dec 18 13:07:30.721: INFO: Pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046792788s
Dec 18 13:07:32.729: INFO: Pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054639948s
Dec 18 13:07:34.759: INFO: Pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084571539s
Dec 18 13:07:36.786: INFO: Pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111534397s
STEP: Saw pod success
Dec 18 13:07:36.786: INFO: Pod "pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f" satisfied condition "success or failure"
Dec 18 13:07:36.818: INFO: Trying to get logs from node iruya-node pod pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f container secret-volume-test: 
STEP: delete the pod
Dec 18 13:07:36.935: INFO: Waiting for pod pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f to disappear
Dec 18 13:07:36.958: INFO: Pod pod-secrets-88caf0c8-e827-49e4-952d-196f3c56c48f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:07:36.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2626" for this suite.
Dec 18 13:07:42.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:07:43.150: INFO: namespace secrets-2626 deletion completed in 6.182607275s

• [SLOW TEST:14.677 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
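The secret-volume test above combines a volume-level defaultMode with a pod-level fsGroup while running as non-root, so the projected files come out with the requested mode and group ownership. A sketch under assumed names and IDs:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret           # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
    fsGroup: 1001                # volume files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      defaultMode: 0440
```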
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:07:43.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 18 13:07:43.342: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8986,SelfLink:/api/v1/namespaces/watch-8986/configmaps/e2e-watch-test-label-changed,UID:c5dfa565-40aa-4c81-9cfd-ac48c1fa625c,ResourceVersion:17135904,Generation:0,CreationTimestamp:2019-12-18 13:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 13:07:43.344: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8986,SelfLink:/api/v1/namespaces/watch-8986/configmaps/e2e-watch-test-label-changed,UID:c5dfa565-40aa-4c81-9cfd-ac48c1fa625c,ResourceVersion:17135905,Generation:0,CreationTimestamp:2019-12-18 13:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 18 13:07:43.344: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8986,SelfLink:/api/v1/namespaces/watch-8986/configmaps/e2e-watch-test-label-changed,UID:c5dfa565-40aa-4c81-9cfd-ac48c1fa625c,ResourceVersion:17135906,Generation:0,CreationTimestamp:2019-12-18 13:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 18 13:07:53.458: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8986,SelfLink:/api/v1/namespaces/watch-8986/configmaps/e2e-watch-test-label-changed,UID:c5dfa565-40aa-4c81-9cfd-ac48c1fa625c,ResourceVersion:17135921,Generation:0,CreationTimestamp:2019-12-18 13:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 13:07:53.459: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8986,SelfLink:/api/v1/namespaces/watch-8986/configmaps/e2e-watch-test-label-changed,UID:c5dfa565-40aa-4c81-9cfd-ac48c1fa625c,ResourceVersion:17135922,Generation:0,CreationTimestamp:2019-12-18 13:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 18 13:07:53.459: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8986,SelfLink:/api/v1/namespaces/watch-8986/configmaps/e2e-watch-test-label-changed,UID:c5dfa565-40aa-4c81-9cfd-ac48c1fa625c,ResourceVersion:17135923,Generation:0,CreationTimestamp:2019-12-18 13:07:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:07:53.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8986" for this suite.
Dec 18 13:07:59.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:07:59.662: INFO: namespace watch-8986 deletion completed in 6.175595214s

• [SLOW TEST:16.512 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
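The ADDED/MODIFIED/DELETED sequence logged above is the watcher's view of a label-selector watch: changing the label out of the selector surfaces as DELETED, and restoring it surfaces as ADDED. The configmap involved, reconstructed from the log; the kubectl line in the comment is one way to observe the same stream, not the test's mechanism:

```yaml
# Observable by hand with:
#   kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored  # the watched selector
data:
  mutation: "1"   # incremented by each "modifying the configmap" step
```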
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:07:59.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3938
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 18 13:07:59.765: INFO: Found 0 stateful pods, waiting for 3
Dec 18 13:08:09.779: INFO: Found 2 stateful pods, waiting for 3
Dec 18 13:08:19.795: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:08:19.795: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:08:19.795: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 13:08:29.776: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:08:29.776: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:08:29.776: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:08:29.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 13:08:32.340: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 13:08:32.341: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 13:08:32.341: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 18 13:08:32.435: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 18 13:08:42.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 13:08:42.888: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 13:08:42.889: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 13:08:42.889: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 13:08:53.011: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:08:53.011: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:08:53.011: INFO: Waiting for Pod statefulset-3938/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:09:03.022: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:09:03.022: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:09:03.022: INFO: Waiting for Pod statefulset-3938/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:09:15.448: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:09:15.448: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:09:23.035: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:09:23.035: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:09:33.026: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:09:33.026: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 13:09:43.048: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 18 13:09:53.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 13:09:53.525: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 13:09:53.525: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 13:09:53.525: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 13:10:03.601: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 18 13:10:13.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 13:10:14.099: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 13:10:14.100: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 13:10:14.100: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 13:10:24.144: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:10:24.145: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:10:24.145: INFO: Waiting for Pod statefulset-3938/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:10:34.217: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:10:34.217: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:10:34.217: INFO: Waiting for Pod statefulset-3938/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:10:44.166: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:10:44.166: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:10:44.166: INFO: Waiting for Pod statefulset-3938/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:10:55.352: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:10:55.352: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:11:04.159: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
Dec 18 13:11:04.159: INFO: Waiting for Pod statefulset-3938/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 18 13:11:14.163: INFO: Waiting for StatefulSet statefulset-3938/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 18 13:11:24.166: INFO: Deleting all statefulset in ns statefulset-3938
Dec 18 13:11:24.169: INFO: Scaling statefulset ss2 to 0
Dec 18 13:12:04.197: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 13:12:04.204: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:12:04.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3938" for this suite.
Dec 18 13:12:14.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:12:14.437: INFO: namespace statefulset-3938 deletion completed in 10.173476763s

• [SLOW TEST:254.775 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
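The StatefulSet above (ss2, three replicas, nginx:1.14-alpine to 1.15-alpine and back) relies on the RollingUpdate strategy, which replaces pods in reverse ordinal order and tracks each template change as a new controller revision (the ss2-6c5cd755cd / ss2-7c9b54fd4c names in the log). A sketch of the object, with the service name and labels assumed:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test              # headless service created by the test harness
  replicas: 3
  selector:
    matchLabels:
      app: ss2                   # assumed label
  updateStrategy:
    type: RollingUpdate          # pods replaced highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
# Roll forward by changing the image in .spec.template and re-applying;
# restoring the previous image rolls back, which is what the test does.
```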
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:12:14.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 18 13:12:14.645: INFO: Waiting up to 5m0s for pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995" in namespace "downward-api-2630" to be "success or failure"
Dec 18 13:12:14.751: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Pending", Reason="", readiness=false. Elapsed: 105.145491ms
Dec 18 13:12:16.759: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113160055s
Dec 18 13:12:18.771: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12582696s
Dec 18 13:12:20.780: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13409044s
Dec 18 13:12:22.791: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145394772s
Dec 18 13:12:24.803: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157947857s
Dec 18 13:12:26.823: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.17736393s
STEP: Saw pod success
Dec 18 13:12:26.824: INFO: Pod "downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995" satisfied condition "success or failure"
Dec 18 13:12:26.838: INFO: Trying to get logs from node iruya-node pod downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995 container dapi-container: 
STEP: delete the pod
Dec 18 13:12:26.959: INFO: Waiting for pod downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995 to disappear
Dec 18 13:12:26.975: INFO: Pod downward-api-2ae00b3f-66bf-4a01-a6b8-36ca6ade5995 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:12:26.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2630" for this suite.
Dec 18 13:12:33.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:12:33.350: INFO: namespace downward-api-2630 deletion completed in 6.308569025s

• [SLOW TEST:18.912 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
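The env-var flavor of the downward API, tested above, injects the container's own limits and requests through resourceFieldRef. A minimal sketch; resource values and names are assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: "250m"
        memory: "32Mi"
      limits:
        cpu: "500m"
        memory: "64Mi"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:        # containerName defaults to this container
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```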
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:12:33.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2270
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2270
STEP: Deleting pre-stop pod
Dec 18 13:12:56.607: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:12:56.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2270" for this suite.
Dec 18 13:13:34.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:13:34.832: INFO: namespace prestop-2270 deletion completed in 38.1658092s

• [SLOW TEST:61.483 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
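The preStop flow above uses a server pod that records a /prestop request sent from the tester pod's hook before the tester is killed (the "Received": {"prestop": 1} in the JSON). A single-pod sketch of the hook wiring; the server URL and names are hypothetical stand-ins for the test's server/tester pair:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-example          # hypothetical name
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # Runs before the container receives SIGTERM; the e2e tester
          # instead POSTs /prestop to the server pod, which records it.
          command: ["sh", "-c", "wget -qO- http://server/prestop || true"]
```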
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:13:34.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 18 13:13:45.844: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2c3ffd8b-931e-44c6-8ba7-b2bc43d98b29,GenerateName:,Namespace:events-9158,SelfLink:/api/v1/namespaces/events-9158/pods/send-events-2c3ffd8b-931e-44c6-8ba7-b2bc43d98b29,UID:2e2aa703-3697-4307-a0f2-c5c474832e0c,ResourceVersion:17136812,Generation:0,CreationTimestamp:2019-12-18 13:13:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 775314299,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpxdd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpxdd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-lpxdd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202bb80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202bba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:13:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:13:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:13:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:13:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-18 13:13:35 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-18 13:13:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://e6d47a06c2425a26633ce52f4c431fb25cc5487b0ae34a35935a6aacef5baf09}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 18 13:13:47.859: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 18 13:13:49.877: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:13:49.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9158" for this suite.
Dec 18 13:14:30.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:14:30.258: INFO: namespace events-9158 deletion completed in 40.272507478s

• [SLOW TEST:55.424 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
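The scheduler and kubelet events checked above are ordinary Event objects attached to the pod: a scheduler "Scheduled" event plus kubelet image-pull/create/start events. A sketch of the pod (image taken from the log dump), with one assumed way to list its events in the comment:

```yaml
# Assumed inspection command (pod name hypothetical):
#   kubectl get events --field-selector involvedObject.name=send-events-example
apiVersion: v1
kind: Pod
metadata:
  name: send-events-example      # hypothetical name
  labels:
    name: foo                    # label used by the test to retrieve the pod
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
```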
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:14:30.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:14:40.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1572" for this suite.
Dec 18 13:15:32.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:15:32.823: INFO: namespace kubelet-test-1572 deletion completed in 52.130428067s

• [SLOW TEST:62.565 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
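The hostAliases test above has the kubelet merge static entries into the container's /etc/hosts. Sketch, with placeholder IP and hostnames:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example      # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"           # placeholder values
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    # The kubelet appends the hostAliases entries to the managed /etc/hosts.
    command: ["sh", "-c", "cat /etc/hosts"]
```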
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:15:32.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:15:44.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8831" for this suite.
Dec 18 13:16:24.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:16:24.369: INFO: namespace replication-controller-8831 deletion completed in 40.267178026s

• [SLOW TEST:51.546 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
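Adoption above works because the ReplicationController's selector matches the pre-existing orphan pod's 'name' label, so the controller takes ownership of it instead of creating a fresh replica. A sketch using the pod-adoption name from the log; the image is assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption           # label the RC selector will match
spec:
  containers:
  - name: pod-adoption
    image: nginx:1.14-alpine     # assumed image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption           # matches the orphan, so the RC adopts it
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: nginx:1.14-alpine
```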
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:16:24.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0f9164c6-42d0-45a8-868c-7874886a20d3
STEP: Creating secret with name s-test-opt-upd-6146adb3-35bf-4727-8f09-383438e431c6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0f9164c6-42d0-45a8-868c-7874886a20d3
STEP: Updating secret s-test-opt-upd-6146adb3-35bf-4727-8f09-383438e431c6
STEP: Creating secret with name s-test-opt-create-a8d3f0f3-3bd8-4bdd-9294-a770aa95b610
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:17:46.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7400" for this suite.
Dec 18 13:18:08.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:08.587: INFO: namespace projected-7400 deletion completed in 22.169348883s

• [SLOW TEST:104.216 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
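The optional-secret test above projects three secrets into one volume and expects the kubelet to keep the mount healthy through deletion, in-place update, and late creation, because every source is marked optional. Sketch, with the secret names shortened from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-optional-example # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: s-test-opt-del     # deleted mid-test; optional, so no error
          optional: true
      - secret:
          name: s-test-opt-upd     # updated in place; kubelet refreshes files
          optional: true
      - secret:
          name: s-test-opt-create  # created after the pod; appears once it exists
          optional: true
```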
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:18:08.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-85ea4830-59ef-41c0-9b6d-37c72f67c07d
STEP: Creating a pod to test consume secrets
Dec 18 13:18:08.776: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465" in namespace "projected-6904" to be "success or failure"
Dec 18 13:18:08.946: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465": Phase="Pending", Reason="", readiness=false. Elapsed: 169.435769ms
Dec 18 13:18:10.955: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177891682s
Dec 18 13:18:12.965: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187854223s
Dec 18 13:18:14.975: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198271063s
Dec 18 13:18:16.988: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465": Phase="Pending", Reason="", readiness=false. Elapsed: 8.210685016s
Dec 18 13:18:19.000: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22341536s
STEP: Saw pod success
Dec 18 13:18:19.000: INFO: Pod "pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465" satisfied condition "success or failure"
Dec 18 13:18:19.007: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 13:18:19.396: INFO: Waiting for pod pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465 to disappear
Dec 18 13:18:19.528: INFO: Pod pod-projected-secrets-56e9d307-a18d-46e6-90be-bb418ed3c465 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:18:19.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6904" for this suite.
Dec 18 13:18:25.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:25.760: INFO: namespace projected-6904 deletion completed in 6.220784406s

• [SLOW TEST:17.172 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
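"Mappings and Item Mode" in the test above means per-item key-to-path renaming plus a per-file mode that overrides the volume's defaultMode. Sketch with assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-example # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: example-secret     # hypothetical secret with key data-1
          items:
          - key: data-1
            path: new-path-data-1  # the mapping: key projected under this name
            mode: 0400             # per-item mode, overrides defaultMode
```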
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:18:25.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bc03dcd4-992c-48c7-a06a-9296110c1a66
STEP: Creating a pod to test consume secrets
Dec 18 13:18:26.080: INFO: Waiting up to 5m0s for pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801" in namespace "secrets-8475" to be "success or failure"
Dec 18 13:18:26.178: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801": Phase="Pending", Reason="", readiness=false. Elapsed: 97.338742ms
Dec 18 13:18:28.189: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108468801s
Dec 18 13:18:30.197: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116844163s
Dec 18 13:18:32.208: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127641474s
Dec 18 13:18:34.215: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135057666s
Dec 18 13:18:36.225: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.144666629s
STEP: Saw pod success
Dec 18 13:18:36.225: INFO: Pod "pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801" satisfied condition "success or failure"
Dec 18 13:18:36.231: INFO: Trying to get logs from node iruya-node pod pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801 container secret-volume-test: 
STEP: delete the pod
Dec 18 13:18:36.289: INFO: Waiting for pod pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801 to disappear
Dec 18 13:18:36.411: INFO: Pod pod-secrets-9b2adce2-a2d4-4789-8d70-7df063bbb801 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:18:36.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8475" for this suite.
Dec 18 13:18:42.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:42.713: INFO: namespace secrets-8475 deletion completed in 6.291261183s
STEP: Destroying namespace "secret-namespace-7405" for this suite.
Dec 18 13:18:48.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:18:48.962: INFO: namespace secret-namespace-7405 deletion completed in 6.2484765s

• [SLOW TEST:23.202 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
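Note: what the secrets test above checks is namespace isolation of secret volume sources: it creates a second secret with the same name in a throwaway namespace (secret-namespace-7405 in this run) and verifies the pod still mounts the one from its own namespace. A sketch under assumed names; only the two-namespaces/one-name structure is taken from the log.

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
      const name = "secret-test-example" // hypothetical shared secret name
      mk := func(ns, val string) *corev1.Secret {
          return &corev1.Secret{
              ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
              Data:       map[string][]byte{"data-1": []byte(val)},
          }
      }
      own := mk("secrets-example", "value-1")            // secret in the pod's namespace
      decoy := mk("secret-namespace-example", "value-2") // same name in a different namespace
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: own.Namespace},
          Spec: corev1.PodSpec{
              Containers: []corev1.Container{{
                  Name:         "secret-volume-test",
                  Image:        "busybox", // stand-in image
                  Command:      []string{"cat", "/etc/secret-volume/data-1"},
                  VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
              }},
              Volumes: []corev1.Volume{{
                  Name: "secret-volume",
                  // SecretName resolves in the pod's own namespace, so the decoy never interferes
                  VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: name}},
              }},
          },
      }
      fmt.Println(pod.Name, "reads", own.Namespace+"/"+name, "never", decoy.Namespace+"/"+name)
  }

------------------------------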
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:18:48.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 18 13:18:59.723: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9314ab21-e1b0-4bdd-b741-b410cae844dc"
Dec 18 13:18:59.723: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9314ab21-e1b0-4bdd-b741-b410cae844dc" in namespace "pods-3986" to be "terminated due to deadline exceeded"
Dec 18 13:18:59.783: INFO: Pod "pod-update-activedeadlineseconds-9314ab21-e1b0-4bdd-b741-b410cae844dc": Phase="Running", Reason="", readiness=true. Elapsed: 59.821353ms
Dec 18 13:19:01.803: INFO: Pod "pod-update-activedeadlineseconds-9314ab21-e1b0-4bdd-b741-b410cae844dc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.07977081s
Dec 18 13:19:01.803: INFO: Pod "pod-update-activedeadlineseconds-9314ab21-e1b0-4bdd-b741-b410cae844dc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:19:01.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3986" for this suite.
Dec 18 13:19:07.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:19:07.967: INFO: namespace pods-3986 deletion completed in 6.154066474s

• [SLOW TEST:19.005 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
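Note: the activeDeadlineSeconds test above is an in-place spec update: once the deadline is shortened on a running pod, the kubelet fails it, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition in the log. A sketch of the patch body; the value 5 is an assumption, and the client-go call in the comment assumes the context-free Patch signature of the client-go vintage matching this run (v1.15).

  package main

  import (
      "encoding/json"
      "fmt"
  )

  func main() {
      // shrink the deadline on a live pod; the kubelet enforces it shortly afterwards
      patch := map[string]interface{}{
          "spec": map[string]interface{}{
              "activeDeadlineSeconds": 5, // assumed short deadline
          },
      }
      body, err := json.Marshal(patch)
      if err != nil {
          panic(err)
      }
      // e.g. clientset.CoreV1().Pods(ns).Patch(podName, types.MergePatchType, body)
      fmt.Println(string(body))
  }

------------------------------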
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:19:07.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-108180f4-2b12-424f-bc77-e876d093affd
STEP: Creating a pod to test consume configMaps
Dec 18 13:19:08.159: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5" in namespace "projected-7442" to be "success or failure"
Dec 18 13:19:08.168: INFO: Pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.147638ms
Dec 18 13:19:10.176: INFO: Pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017284725s
Dec 18 13:19:12.186: INFO: Pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026891335s
Dec 18 13:19:14.200: INFO: Pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041170601s
Dec 18 13:19:16.215: INFO: Pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055670926s
STEP: Saw pod success
Dec 18 13:19:16.215: INFO: Pod "pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5" satisfied condition "success or failure"
Dec 18 13:19:16.225: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 13:19:16.316: INFO: Waiting for pod pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5 to disappear
Dec 18 13:19:16.322: INFO: Pod pod-projected-configmaps-6c19777a-9967-4f57-a212-4c771d7b10a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:19:16.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7442" for this suite.
Dec 18 13:19:22.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:19:22.512: INFO: namespace projected-7442 deletion completed in 6.183051846s

• [SLOW TEST:14.544 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
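Note: the multiple-volumes test above mounts one ConfigMap through two independent projected volumes in the same pod and reads both copies. A minimal sketch with hypothetical names and image:

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
      ref := corev1.LocalObjectReference{Name: "projected-configmap-test-volume-example"}
      vol := func(n string) corev1.Volume { // the same ConfigMap, projected twice
          return corev1.Volume{
              Name: n,
              VolumeSource: corev1.VolumeSource{
                  Projected: &corev1.ProjectedVolumeSource{
                      Sources: []corev1.VolumeProjection{{
                          ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: ref},
                      }},
                  },
              },
          }
      }
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
          Spec: corev1.PodSpec{
              Containers: []corev1.Container{{
                  Name:    "projected-configmap-volume-test",
                  Image:   "busybox", // stand-in image
                  Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/data-1 /etc/projected-configmap-volume-2/data-1"},
                  VolumeMounts: []corev1.VolumeMount{
                      {Name: "projected-configmap-volume-1", MountPath: "/etc/projected-configmap-volume"},
                      {Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2"},
                  },
              }},
              Volumes: []corev1.Volume{vol("projected-configmap-volume-1"), vol("projected-configmap-volume-2")},
          },
      }
      fmt.Println(pod.Name)
  }

------------------------------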
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:19:22.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-wzdfs in namespace proxy-9433
I1218 13:19:22.747716       8 runners.go:180] Created replication controller with name: proxy-service-wzdfs, namespace: proxy-9433, replica count: 1
I1218 13:19:23.799564       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:24.800305       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:25.800852       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:26.801748       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:27.802569       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:28.803304       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:29.803885       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:30.804457       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:31.804972       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:19:32.805357       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:33.806104       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:34.806737       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:35.807213       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:36.807743       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:37.808117       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:38.808537       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:39.809015       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:40.809352       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1218 13:19:41.810517       8 runners.go:180] proxy-service-wzdfs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 18 13:19:41.818: INFO: setup took 19.213005496s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 18 13:19:41.860: INFO: (0) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 40.843202ms)
Dec 18 13:19:41.860: INFO: (0) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 40.948205ms)
Dec 18 13:19:41.860: INFO: (0) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 41.455007ms)
Dec 18 13:19:41.860: INFO: (0) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 41.31891ms)
Dec 18 13:19:41.860: INFO: (0) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 41.096362ms)
Dec 18 13:19:41.860: INFO: (0) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 41.596893ms)
Dec 18 13:19:41.861: INFO: (0) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 42.101632ms)
Dec 18 13:19:41.863: INFO: (0) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 44.570198ms)
Dec 18 13:19:41.864: INFO: (0) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 45.447197ms)
Dec 18 13:19:41.866: INFO: (0) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 47.32034ms)
Dec 18 13:19:41.866: INFO: (0) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 47.82941ms)
Dec 18 13:19:41.885: INFO: (0) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 66.41772ms)
Dec 18 13:19:41.885: INFO: (0) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 66.192784ms)
Dec 18 13:19:41.885: INFO: (0) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 66.839517ms)
Dec 18 13:19:41.885: INFO: (0) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 66.437224ms)
Dec 18 13:19:41.885: INFO: (0) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 25.32545ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 25.523081ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 25.714786ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 25.741278ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test (200; 26.523895ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 26.073294ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 26.118906ms)
Dec 18 13:19:41.912: INFO: (1) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 26.451442ms)
Dec 18 13:19:41.920: INFO: (2) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 7.583669ms)
Dec 18 13:19:41.930: INFO: (2) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 16.23093ms)
Dec 18 13:19:41.930: INFO: (2) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 16.541849ms)
Dec 18 13:19:41.930: INFO: (2) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 16.845227ms)
Dec 18 13:19:41.931: INFO: (2) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 17.015666ms)
Dec 18 13:19:41.931: INFO: (2) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 17.430425ms)
Dec 18 13:19:41.931: INFO: (2) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 17.51295ms)
Dec 18 13:19:41.931: INFO: (2) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 17.809873ms)
Dec 18 13:19:41.931: INFO: (2) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 17.713938ms)
Dec 18 13:19:41.942: INFO: (3) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 11.331995ms)
Dec 18 13:19:41.942: INFO: (3) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 11.290701ms)
Dec 18 13:19:41.943: INFO: (3) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 11.867864ms)
Dec 18 13:19:41.943: INFO: (3) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 12.073041ms)
Dec 18 13:19:41.943: INFO: (3) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 12.073578ms)
Dec 18 13:19:41.943: INFO: (3) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 11.973735ms)
Dec 18 13:19:41.944: INFO: (3) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 12.935534ms)
Dec 18 13:19:41.944: INFO: (3) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 12.866554ms)
Dec 18 13:19:41.944: INFO: (3) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 13.648683ms)
Dec 18 13:19:41.944: INFO: (3) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 13.600819ms)
Dec 18 13:19:41.945: INFO: (3) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 13.559287ms)
Dec 18 13:19:41.945: INFO: (3) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 13.823294ms)
Dec 18 13:19:41.945: INFO: (3) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 14.31388ms)
Dec 18 13:19:41.946: INFO: (3) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 15.596684ms)
Dec 18 13:19:41.954: INFO: (4) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 7.179699ms)
Dec 18 13:19:41.954: INFO: (4) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 7.469397ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 7.791998ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 8.23608ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 8.446779ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 8.428743ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 8.655195ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 8.410882ms)
Dec 18 13:19:41.955: INFO: (4) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test (200; 12.578246ms)
Dec 18 13:19:41.978: INFO: (5) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 13.855239ms)
Dec 18 13:19:41.978: INFO: (5) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 14.051121ms)
Dec 18 13:19:41.979: INFO: (5) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 14.553337ms)
Dec 18 13:19:41.979: INFO: (5) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 14.171912ms)
Dec 18 13:19:41.981: INFO: (5) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 17.095482ms)
Dec 18 13:19:41.983: INFO: (5) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 18.743207ms)
Dec 18 13:19:41.983: INFO: (5) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 18.293654ms)
Dec 18 13:19:41.983: INFO: (5) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 18.165487ms)
Dec 18 13:19:41.985: INFO: (5) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 19.760712ms)
Dec 18 13:19:41.996: INFO: (6) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 11.030878ms)
Dec 18 13:19:41.996: INFO: (6) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 14.703953ms)
Dec 18 13:19:42.000: INFO: (6) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 14.456229ms)
Dec 18 13:19:42.000: INFO: (6) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 14.479442ms)
Dec 18 13:19:42.000: INFO: (6) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 14.696963ms)
Dec 18 13:19:42.000: INFO: (6) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 15.02371ms)
Dec 18 13:19:42.001: INFO: (6) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 14.966718ms)
Dec 18 13:19:42.013: INFO: (7) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 11.76619ms)
Dec 18 13:19:42.016: INFO: (7) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 15.026058ms)
Dec 18 13:19:42.016: INFO: (7) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 15.162575ms)
Dec 18 13:19:42.017: INFO: (7) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 15.751235ms)
Dec 18 13:19:42.018: INFO: (7) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 16.775332ms)
Dec 18 13:19:42.018: INFO: (7) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 17.113555ms)
Dec 18 13:19:42.018: INFO: (7) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 17.3385ms)
Dec 18 13:19:42.020: INFO: (7) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 19.099487ms)
Dec 18 13:19:42.020: INFO: (7) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 19.223947ms)
Dec 18 13:19:42.020: INFO: (7) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 19.057908ms)
Dec 18 13:19:42.020: INFO: (7) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 19.371364ms)
Dec 18 13:19:42.021: INFO: (7) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 19.831131ms)
Dec 18 13:19:42.021: INFO: (7) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 19.725187ms)
Dec 18 13:19:42.022: INFO: (7) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 20.693312ms)
Dec 18 13:19:42.032: INFO: (8) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 9.896396ms)
Dec 18 13:19:42.032: INFO: (8) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 9.978786ms)
Dec 18 13:19:42.036: INFO: (8) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 13.573983ms)
Dec 18 13:19:42.036: INFO: (8) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 13.540175ms)
Dec 18 13:19:42.036: INFO: (8) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 13.918321ms)
Dec 18 13:19:42.037: INFO: (8) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 15.1769ms)
Dec 18 13:19:42.038: INFO: (8) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 15.485852ms)
Dec 18 13:19:42.038: INFO: (8) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 15.676555ms)
Dec 18 13:19:42.038: INFO: (8) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 16.19024ms)
Dec 18 13:19:42.038: INFO: (8) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 16.167765ms)
Dec 18 13:19:42.039: INFO: (8) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 16.356166ms)
Dec 18 13:19:42.039: INFO: (8) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 16.341139ms)
Dec 18 13:19:42.039: INFO: (8) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 13.106435ms)
Dec 18 13:19:42.053: INFO: (9) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 12.804996ms)
Dec 18 13:19:42.053: INFO: (9) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 13.814153ms)
Dec 18 13:19:42.054: INFO: (9) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 13.564152ms)
Dec 18 13:19:42.055: INFO: (9) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 15.3217ms)
Dec 18 13:19:42.055: INFO: (9) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 15.168551ms)
Dec 18 13:19:42.055: INFO: (9) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 15.590229ms)
Dec 18 13:19:42.056: INFO: (9) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 16.369407ms)
Dec 18 13:19:42.056: INFO: (9) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 16.78487ms)
Dec 18 13:19:42.071: INFO: (10) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 14.22937ms)
Dec 18 13:19:42.071: INFO: (10) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 14.202181ms)
Dec 18 13:19:42.071: INFO: (10) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 14.141894ms)
Dec 18 13:19:42.071: INFO: (10) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 14.701384ms)
Dec 18 13:19:42.071: INFO: (10) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 14.55045ms)
Dec 18 13:19:42.071: INFO: (10) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 14.879436ms)
Dec 18 13:19:42.072: INFO: (10) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 15.407772ms)
Dec 18 13:19:42.076: INFO: (10) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 19.168685ms)
Dec 18 13:19:42.076: INFO: (10) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 19.169992ms)
Dec 18 13:19:42.076: INFO: (10) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 19.419135ms)
Dec 18 13:19:42.077: INFO: (10) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 19.774132ms)
Dec 18 13:19:42.077: INFO: (10) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 20.33408ms)
Dec 18 13:19:42.077: INFO: (10) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 20.647733ms)
Dec 18 13:19:42.077: INFO: (10) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 20.611754ms)
Dec 18 13:19:42.077: INFO: (10) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 20.784644ms)
Dec 18 13:19:42.090: INFO: (11) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 12.903612ms)
Dec 18 13:19:42.091: INFO: (11) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 13.330467ms)
Dec 18 13:19:42.091: INFO: (11) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 13.362075ms)
Dec 18 13:19:42.091: INFO: (11) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 15.431019ms)
Dec 18 13:19:42.093: INFO: (11) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 15.751526ms)
Dec 18 13:19:42.094: INFO: (11) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 16.409586ms)
Dec 18 13:19:42.094: INFO: (11) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 16.572638ms)
Dec 18 13:19:42.095: INFO: (11) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 17.694492ms)
Dec 18 13:19:42.101: INFO: (11) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 22.938079ms)
Dec 18 13:19:42.101: INFO: (11) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 23.086974ms)
Dec 18 13:19:42.101: INFO: (11) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 23.230463ms)
Dec 18 13:19:42.101: INFO: (11) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 23.667163ms)
Dec 18 13:19:42.107: INFO: (12) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 5.343751ms)
Dec 18 13:19:42.107: INFO: (12) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 5.608594ms)
Dec 18 13:19:42.107: INFO: (12) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 5.872276ms)
Dec 18 13:19:42.108: INFO: (12) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 10.703751ms)
Dec 18 13:19:42.113: INFO: (12) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 11.29369ms)
Dec 18 13:19:42.113: INFO: (12) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 11.35388ms)
Dec 18 13:19:42.113: INFO: (12) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 11.309205ms)
Dec 18 13:19:42.114: INFO: (12) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 12.525829ms)
Dec 18 13:19:42.121: INFO: (13) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 6.52194ms)
Dec 18 13:19:42.121: INFO: (13) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 7.002004ms)
Dec 18 13:19:42.121: INFO: (13) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 7.167574ms)
Dec 18 13:19:42.131: INFO: (13) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 16.657446ms)
Dec 18 13:19:42.131: INFO: (13) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 16.666362ms)
Dec 18 13:19:42.131: INFO: (13) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 17.109374ms)
Dec 18 13:19:42.131: INFO: (13) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 17.000656ms)
Dec 18 13:19:42.133: INFO: (13) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 18.560287ms)
Dec 18 13:19:42.133: INFO: (13) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 19.003708ms)
Dec 18 13:19:42.133: INFO: (13) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 18.914807ms)
Dec 18 13:19:42.134: INFO: (13) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 19.216168ms)
Dec 18 13:19:42.134: INFO: (13) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ... (200; 19.290836ms)
Dec 18 13:19:42.134: INFO: (13) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 19.392667ms)
Dec 18 13:19:42.134: INFO: (13) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 19.518507ms)
Dec 18 13:19:42.134: INFO: (13) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 19.913388ms)
Dec 18 13:19:42.140: INFO: (14) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 5.344087ms)
Dec 18 13:19:42.143: INFO: (14) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 8.393349ms)
Dec 18 13:19:42.143: INFO: (14) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 8.976209ms)
Dec 18 13:19:42.143: INFO: (14) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 8.688411ms)
Dec 18 13:19:42.147: INFO: (14) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 12.033703ms)
Dec 18 13:19:42.147: INFO: (14) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 12.036721ms)
Dec 18 13:19:42.147: INFO: (14) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 12.448092ms)
Dec 18 13:19:42.147: INFO: (14) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 12.434718ms)
Dec 18 13:19:42.147: INFO: (14) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 12.747106ms)
Dec 18 13:19:42.147: INFO: (14) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 13.13337ms)
Dec 18 13:19:42.148: INFO: (14) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 13.178501ms)
Dec 18 13:19:42.148: INFO: (14) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 13.662069ms)
Dec 18 13:19:42.148: INFO: (14) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 13.370026ms)
Dec 18 13:19:42.148: INFO: (14) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 4.986757ms)
Dec 18 13:19:42.154: INFO: (15) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 5.238217ms)
Dec 18 13:19:42.154: INFO: (15) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 5.661768ms)
Dec 18 13:19:42.156: INFO: (15) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 6.756437ms)
Dec 18 13:19:42.156: INFO: (15) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 6.687441ms)
Dec 18 13:19:42.157: INFO: (15) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 8.196662ms)
Dec 18 13:19:42.157: INFO: (15) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 8.358755ms)
Dec 18 13:19:42.157: INFO: (15) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 8.372663ms)
Dec 18 13:19:42.157: INFO: (15) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test (200; 9.236418ms)
Dec 18 13:19:42.168: INFO: (16) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test<... (200; 10.336654ms)
Dec 18 13:19:42.170: INFO: (16) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 10.471498ms)
Dec 18 13:19:42.170: INFO: (16) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 10.269204ms)
Dec 18 13:19:42.170: INFO: (16) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 11.283599ms)
Dec 18 13:19:42.170: INFO: (16) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 11.152034ms)
Dec 18 13:19:42.170: INFO: (16) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 10.820569ms)
Dec 18 13:19:42.170: INFO: (16) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 10.995042ms)
Dec 18 13:19:42.171: INFO: (16) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 12.346777ms)
Dec 18 13:19:42.179: INFO: (17) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 7.604755ms)
Dec 18 13:19:42.179: INFO: (17) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 8.184077ms)
Dec 18 13:19:42.180: INFO: (17) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test (200; 8.48113ms)
Dec 18 13:19:42.180: INFO: (17) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 8.439381ms)
Dec 18 13:19:42.180: INFO: (17) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname1/proxy/: foo (200; 8.787175ms)
Dec 18 13:19:42.180: INFO: (17) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 8.777294ms)
Dec 18 13:19:42.180: INFO: (17) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname1/proxy/: foo (200; 9.122015ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 9.753283ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname2/proxy/: tls qux (200; 9.697334ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 9.892521ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/services/http:proxy-service-wzdfs:portname2/proxy/: bar (200; 9.775732ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 9.841893ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 9.962378ms)
Dec 18 13:19:42.181: INFO: (17) /api/v1/namespaces/proxy-9433/services/https:proxy-service-wzdfs:tlsportname1/proxy/: tls baz (200; 9.854892ms)
Dec 18 13:19:42.182: INFO: (17) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 10.909718ms)
Dec 18 13:19:42.193: INFO: (18) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 10.832731ms)
Dec 18 13:19:42.193: INFO: (18) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 10.993527ms)
Dec 18 13:19:42.194: INFO: (18) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8/proxy/: test (200; 11.33548ms)
Dec 18 13:19:42.194: INFO: (18) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 11.567546ms)
Dec 18 13:19:42.194: INFO: (18) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 11.989656ms)
Dec 18 13:19:42.195: INFO: (18) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 12.389667ms)
Dec 18 13:19:42.195: INFO: (18) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 12.501981ms)
Dec 18 13:19:42.195: INFO: (18) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: test (200; 4.210562ms)
Dec 18 13:19:42.203: INFO: (19) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:462/proxy/: tls qux (200; 5.327973ms)
Dec 18 13:19:42.208: INFO: (19) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:160/proxy/: foo (200; 9.523399ms)
Dec 18 13:19:42.208: INFO: (19) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 9.698307ms)
Dec 18 13:19:42.209: INFO: (19) /api/v1/namespaces/proxy-9433/services/proxy-service-wzdfs:portname2/proxy/: bar (200; 10.795146ms)
Dec 18 13:19:42.209: INFO: (19) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:1080/proxy/: test<... (200; 11.147032ms)
Dec 18 13:19:42.209: INFO: (19) /api/v1/namespaces/proxy-9433/pods/proxy-service-wzdfs-tg4c8:162/proxy/: bar (200; 11.208697ms)
Dec 18 13:19:42.209: INFO: (19) /api/v1/namespaces/proxy-9433/pods/http:proxy-service-wzdfs-tg4c8:1080/proxy/: ... (200; 11.281707ms)
Dec 18 13:19:42.209: INFO: (19) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:460/proxy/: tls baz (200; 11.585043ms)
Dec 18 13:19:42.209: INFO: (19) /api/v1/namespaces/proxy-9433/pods/https:proxy-service-wzdfs-tg4c8:443/proxy/: ...
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:20:03.106: INFO: Creating deployment "nginx-deployment"
Dec 18 13:20:03.116: INFO: Waiting for observed generation 1
Dec 18 13:20:05.434: INFO: Waiting for all required pods to come up
Dec 18 13:20:06.029: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 18 13:20:36.411: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 18 13:20:36.420: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 18 13:20:36.432: INFO: Updating deployment nginx-deployment
Dec 18 13:20:36.432: INFO: Waiting for observed generation 2
Dec 18 13:20:38.709: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 18 13:20:38.742: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 18 13:20:39.210: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 18 13:20:39.406: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 18 13:20:39.407: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 18 13:20:39.412: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 18 13:20:39.422: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 18 13:20:39.422: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 18 13:20:39.436: INFO: Updating deployment nginx-deployment
Dec 18 13:20:39.436: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 18 13:20:40.183: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 18 13:20:44.273: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
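Note: the two values just verified (.spec.replicas = 20 and 13) are the proportional-scaling arithmetic working out. Scaling from 10 to 30 with maxSurge=3 permits 33 pods at once; the replicasets currently request 8 and 5, so the 20-replica leftover is split roughly 8:5 between them. A simplified sketch of that calculation (the deployment controller works from replica-count annotations and breaks rounding ties differently, so treat this as illustration, not the controller's code):

  package main

  import (
      "fmt"
      "math"
  )

  func main() {
      const (
          oldRS    = 8  // first rollout's current .spec.replicas
          newRS    = 5  // second rollout's current .spec.replicas
          desired  = 30 // deployment scaled from 10 to 30
          maxSurge = 3  // from the RollingUpdate strategy
      )
      allowed := desired + maxSurge // 33 pods may exist during the rollout
      total := oldRS + newRS        // 13 currently requested
      leftover := allowed - total   // 20 replicas to distribute
      oldExtra := int(math.Round(float64(leftover*oldRS) / float64(total))) // 20*8/13 ~ 12
      newExtra := leftover - oldExtra                                       // 8
      fmt.Println(oldRS+oldExtra, newRS+newExtra) // prints "20 13", matching the log
  }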
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 18 13:20:46.941: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6639,SelfLink:/apis/apps/v1/namespaces/deployment-6639/deployments/nginx-deployment,UID:4ab497b1-3af3-40eb-907a-cecdb48f1ba5,ResourceVersion:17137866,Generation:3,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-12-18 13:20:40 +0000 UTC 2019-12-18 13:20:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-18 13:20:43 +0000 UTC 2019-12-18 13:20:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Dec 18 13:20:49.407: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6639,SelfLink:/apis/apps/v1/namespaces/deployment-6639/replicasets/nginx-deployment-55fb7cb77f,UID:e1206501-3653-492d-8991-6bdd0d1798aa,ResourceVersion:17137861,Generation:3,CreationTimestamp:2019-12-18 13:20:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4ab497b1-3af3-40eb-907a-cecdb48f1ba5 0xc002798107 0xc002798108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 13:20:49.407: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Dec 18 13:20:49.408: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6639,SelfLink:/apis/apps/v1/namespaces/deployment-6639/replicasets/nginx-deployment-7b8c6f4498,UID:90413ad1-20d1-4665-8071-2be229eff6ae,ResourceVersion:17137862,Generation:3,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 4ab497b1-3af3-40eb-907a-cecdb48f1ba5 0xc0027981d7 0xc0027981d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Dec 18 13:20:52.410: INFO: Pod "nginx-deployment-55fb7cb77f-6sh2j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6sh2j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-6sh2j,UID:e7f7b5ca-beb7-40d2-8ce0-3064ea655f35,ResourceVersion:17137836,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e0f7 0xc00206e0f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.411: INFO: Pod "nginx-deployment-55fb7cb77f-9kzfk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9kzfk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-9kzfk,UID:9a50c6ef-2f5c-4001-ac97-2c0e191ed06f,ResourceVersion:17137790,Generation:0,CreationTimestamp:2019-12-18 13:20:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e217 0xc00206e218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:20:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.411: INFO: Pod "nginx-deployment-55fb7cb77f-c8x86" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c8x86,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-c8x86,UID:89db1de9-4276-4942-8e37-657e52601f5a,ResourceVersion:17137835,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e387 0xc00206e388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e400} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.412: INFO: Pod "nginx-deployment-55fb7cb77f-d7f6l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d7f6l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-d7f6l,UID:605323ba-e275-4b3b-80f4-b5152879b989,ResourceVersion:17137791,Generation:0,CreationTimestamp:2019-12-18 13:20:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e4a7 0xc00206e4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-18 13:20:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.412: INFO: Pod "nginx-deployment-55fb7cb77f-hdslq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hdslq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-hdslq,UID:4d42b21c-6e81-4068-8216-52cd06883c1b,ResourceVersion:17137878,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e607 0xc00206e608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-18 13:20:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.412: INFO: Pod "nginx-deployment-55fb7cb77f-ltrtz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ltrtz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-ltrtz,UID:6e8fbe6d-89bf-4aee-878b-7bdf4d314f51,ResourceVersion:17137886,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e777 0xc00206e778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e7f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:20:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.413: INFO: Pod "nginx-deployment-55fb7cb77f-m8ljf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m8ljf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-m8ljf,UID:d2549bfe-6a50-41e0-b948-2e749673884d,ResourceVersion:17137775,Generation:0,CreationTimestamp:2019-12-18 13:20:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206e8e7 0xc00206e8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206e950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206e970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-18 13:20:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.413: INFO: Pod "nginx-deployment-55fb7cb77f-sf5nq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sf5nq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-sf5nq,UID:d566f0ff-4e7b-420c-b310-6c1dca011751,ResourceVersion:17137852,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206ea47 0xc00206ea48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206eab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206ead0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.413: INFO: Pod "nginx-deployment-55fb7cb77f-sppmh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sppmh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-sppmh,UID:06f0d84b-9996-4878-8765-9d2b3898a62d,ResourceVersion:17137859,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206eb57 0xc00206eb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206ebc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206ebe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-18 13:20:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.414: INFO: Pod "nginx-deployment-55fb7cb77f-tgpbt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tgpbt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-tgpbt,UID:83f849ba-e6d1-4117-8585-a379350a743c,ResourceVersion:17137800,Generation:0,CreationTimestamp:2019-12-18 13:20:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206ecb7 0xc00206ecb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206ed30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206ed50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:20:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.414: INFO: Pod "nginx-deployment-55fb7cb77f-tjzn7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tjzn7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-tjzn7,UID:6934a9ae-8b70-4af4-8a86-e5b514a4d467,ResourceVersion:17137837,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206ee27 0xc00206ee28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206eea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206eec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.415: INFO: Pod "nginx-deployment-55fb7cb77f-vg6gt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vg6gt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-vg6gt,UID:d201877f-83d9-48cc-9a7b-8b0cf6a59ba1,ResourceVersion:17137834,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206ef47 0xc00206ef48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206efb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206efd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.415: INFO: Pod "nginx-deployment-55fb7cb77f-zxh78" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zxh78,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-55fb7cb77f-zxh78,UID:c43eb156-60e8-4ac6-9e1a-59e9c2b50b84,ResourceVersion:17137797,Generation:0,CreationTimestamp:2019-12-18 13:20:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e1206501-3653-492d-8991-6bdd0d1798aa 0xc00206f057 0xc00206f058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f0d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:20:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.416: INFO: Pod "nginx-deployment-7b8c6f4498-2lmzf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2lmzf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-2lmzf,UID:3c919080-46f9-4c04-a82b-16a733a3325a,ResourceVersion:17137698,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206f1f7 0xc00206f1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://40bd6cb74772fc164c158e7f2b8739a8ff89024909cf604966771849e583c0ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.416: INFO: Pod "nginx-deployment-7b8c6f4498-4mjmd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4mjmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-4mjmd,UID:c76c6e89-0153-423a-bb66-0144711aebc1,ResourceVersion:17137833,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206f357 0xc00206f358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f3d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.417: INFO: Pod "nginx-deployment-7b8c6f4498-4rvtp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4rvtp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-4rvtp,UID:a49b8d5a-4857-4c8e-a7c1-e7d0f6ea2b5f,ResourceVersion:17137850,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206f477 0xc00206f478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f4e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.417: INFO: Pod "nginx-deployment-7b8c6f4498-76nb2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76nb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-76nb2,UID:d67a910e-658b-4176-9d93-62d0546501e5,ResourceVersion:17137732,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206f587 0xc00206f588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://98346443be6cdac4ff566662b7baed2ddfd773dfb7d8c4d69dab07c1b3d034dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.417: INFO: Pod "nginx-deployment-7b8c6f4498-8vwc8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vwc8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-8vwc8,UID:9e1dea12-3bed-44bf-a997-380a305f2d1a,ResourceVersion:17137723,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206f707 0xc00206f708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://431f229ace8367d63cc644eb921a461f44a4e985e8be333052883ee4b172ab1e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.417: INFO: Pod "nginx-deployment-7b8c6f4498-9nhtd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9nhtd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-9nhtd,UID:a5f973d8-486e-4650-8005-02672126b5c5,ResourceVersion:17137729,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206f887 0xc00206f888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206f900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206f920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://659e62c05e66281fe01be87ce3e8b59dc4f3e414f063bea257eb0bff47c27260}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.418: INFO: Pod "nginx-deployment-7b8c6f4498-bsq7m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bsq7m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-bsq7m,UID:5417ef8a-4ac7-4262-b63c-d1b24b637545,ResourceVersion:17137687,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206fa07 0xc00206fa08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206fa70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206fa90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fddc740dd99f55c761954185e17a20012b5b3f8bb85c535a8d98fc1d7776d21b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.418: INFO: Pod "nginx-deployment-7b8c6f4498-bxkss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bxkss,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-bxkss,UID:8fff9673-6774-45b3-8c6a-eaa7aff9ff1a,ResourceVersion:17137853,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206fb67 0xc00206fb68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206fbe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206fc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.418: INFO: Pod "nginx-deployment-7b8c6f4498-cwgxq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cwgxq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-cwgxq,UID:b51af8fc-7fec-4023-a72e-dc72e79f8226,ResourceVersion:17137854,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206fc97 0xc00206fc98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206fd00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206fd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.419: INFO: Pod "nginx-deployment-7b8c6f4498-ddhcf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ddhcf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-ddhcf,UID:0dfcd450-b5ce-4410-bc8d-4cb80a7c7f94,ResourceVersion:17137869,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206fdb7 0xc00206fdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206fe20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206fe40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-18 13:20:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.419: INFO: Pod "nginx-deployment-7b8c6f4498-g8h7r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g8h7r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-g8h7r,UID:b9fcec0a-618f-4c22-b861-3ca991bc80cb,ResourceVersion:17137832,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00206ff07 0xc00206ff08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00206ff80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00206ffa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.419: INFO: Pod "nginx-deployment-7b8c6f4498-gxcmt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gxcmt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-gxcmt,UID:2eb96a6f-1bce-4d8b-adbf-5f9b1cdc0230,ResourceVersion:17137838,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e027 0xc00273e028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e0a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.419: INFO: Pod "nginx-deployment-7b8c6f4498-j9dnq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j9dnq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-j9dnq,UID:7442ed55-5bb3-4222-baa9-748e27de5beb,ResourceVersion:17137867,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e147 0xc00273e148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e1d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:20:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.420: INFO: Pod "nginx-deployment-7b8c6f4498-jgj54" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jgj54,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-jgj54,UID:e0c671f2-6207-43ab-b17c-c60780f85b75,ResourceVersion:17137706,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e2b7 0xc00273e2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://676fe7669c2772103ae55d04eef27a807bef9836346ec5dba93ac7e537e103ff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.420: INFO: Pod "nginx-deployment-7b8c6f4498-ljcmd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ljcmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-ljcmd,UID:cb7d880a-3ad1-4f1f-8107-10e6d50ce85b,ResourceVersion:17137704,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e427 0xc00273e428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2309c3fc8684d4dacc5860f89a4efc569cd0903d5253e113201df6ceee77b98d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.420: INFO: Pod "nginx-deployment-7b8c6f4498-lqpps" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lqpps,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-lqpps,UID:22088f25-2c53-44db-8544-d8c355cfdcd4,ResourceVersion:17137851,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e587 0xc00273e588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.420: INFO: Pod "nginx-deployment-7b8c6f4498-mt2jc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mt2jc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-mt2jc,UID:5a1957cc-a737-4310-bee1-be33bb328e9a,ResourceVersion:17137856,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e6b7 0xc00273e6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.420: INFO: Pod "nginx-deployment-7b8c6f4498-p6xcp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p6xcp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-p6xcp,UID:345ecb53-0d4b-4327-8232-d89dc4b58607,ResourceVersion:17137872,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e7d7 0xc00273e7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:20:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.421: INFO: Pod "nginx-deployment-7b8c6f4498-rlmq2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rlmq2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-rlmq2,UID:d27ae6f2-1560-4656-8d88-485a191e7233,ResourceVersion:17137726,Generation:0,CreationTimestamp:2019-12-18 13:20:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273e937 0xc00273e938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273e9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273e9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-18 13:20:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:20:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d59c2bffbb7665765a4b47e656691bf0420b4697ef97fcbf44017dee4a108281}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Dec 18 13:20:52.421: INFO: Pod "nginx-deployment-7b8c6f4498-x8scw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x8scw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6639,SelfLink:/api/v1/namespaces/deployment-6639/pods/nginx-deployment-7b8c6f4498-x8scw,UID:6cfa5e82-7f2c-46c6-866a-588018ea6d66,ResourceVersion:17137839,Generation:0,CreationTimestamp:2019-12-18 13:20:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 90413ad1-20d1-4665-8071-2be229eff6ae 0xc00273eaa7 0xc00273eaa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdhln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdhln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdhln true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00273eb20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00273eb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:20:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
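Note on the "available" / "not available" labels in the listing above: the deployment controller counts a pod as available once its Ready condition has been True for at least minReadySeconds. A minimal self-contained restatement of that rule in Go (an illustrative sketch, not the upstream helper):

package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable restates the availability rule illustrated by the pod
// dumps above: the pod's Ready condition must be True, and must have been
// True for at least minReadySeconds. Hedged re-implementation for
// illustration only.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		readySince := c.LastTransitionTime.Time
		return !readySince.IsZero() &&
			readySince.Add(time.Duration(minReadySeconds)*time.Second).Before(now)
	}
	return false
}

func main() {}

With minReadySeconds of 0, as in this deployment, availability reduces to the Ready condition itself, which is why every Pending pod above is reported as not available.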
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:20:52.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6639" for this suite.
Dec 18 13:21:44.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:21:44.362: INFO: namespace deployment-6639 deletion completed in 49.906618653s

• [SLOW TEST:101.309 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
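Background for the proportional-scaling test above: when a RollingUpdate deployment is resized mid-rollout, the controller splits the replica delta across the old and new ReplicaSets in proportion to their current sizes, rather than giving it all to one side. A sketch of the core proportion arithmetic (illustrative only; the real controller additionally honors maxSurge and resolves rounding deterministically):

package main

import "fmt"

// proportionalShares splits a scale-up of delta replicas across ReplicaSets
// in proportion to their current sizes; any integer-division leftover goes
// to the largest ReplicaSet. This is only the core idea, not the
// controller's exact algorithm.
func proportionalShares(current []int32, delta int32) []int32 {
	var total int32
	for _, c := range current {
		total += c
	}
	shares := make([]int32, len(current))
	if total == 0 {
		return shares
	}
	var assigned int32
	for i, c := range current {
		shares[i] = delta * c / total
		assigned += shares[i]
	}
	if rem := delta - assigned; rem != 0 {
		largest := 0
		for i, c := range current {
			if c > current[largest] {
				largest = i
			}
		}
		shares[largest] += rem
	}
	return shares
}

func main() {
	// Scaling up by 17 replicas while one RS holds 8 pods and another holds 5:
	fmt.Println(proportionalShares([]int32{8, 5}, 17)) // [11 6]
}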
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:21:44.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 18 13:21:53.163: INFO: Successfully updated pod "annotationupdate1959d743-bde5-45b9-bf6d-87c2fa0ff4f7"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:21:57.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9228" for this suite.
Dec 18 13:22:21.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:22:21.588: INFO: namespace projected-9228 deletion completed in 24.230640398s

• [SLOW TEST:37.224 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
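For reference, the pod this test creates is essentially a projected downwardAPI volume exposing metadata.annotations as a file; the kubelet rewrites that file when the annotations change, which is what the test observes after the "Successfully updated pod" line. A minimal sketch, with illustrative names and image (not the exact e2e fixture):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod builds a pod whose annotations are projected into
// /etc/podinfo/annotations via a downwardAPI volume projection.
func annotationPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example", // illustrative name
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative; the e2e test uses its own image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = annotationPod() }

Updating the pod's annotations (as the log does at 13:21:53) then shows up in the mounted file without restarting the container.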
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:22:21.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-0eaf66f1-3306-4393-aeca-a6d732e4096f
STEP: Creating secret with name s-test-opt-upd-a17be6e9-c2a1-430a-87bb-16e25a7acaa3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0eaf66f1-3306-4393-aeca-a6d732e4096f
STEP: Updating secret s-test-opt-upd-a17be6e9-c2a1-430a-87bb-16e25a7acaa3
STEP: Creating secret with name s-test-opt-create-5aba9c2a-4907-4245-9a8e-a0fb9069d593
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:22:36.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5966" for this suite.
Dec 18 13:22:58.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:22:58.396: INFO: namespace secrets-5966 deletion completed in 22.194419314s

• [SLOW TEST:36.808 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
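The secret volumes in this test are mounted with Optional=true, which is what lets the s-test-opt-del secret be deleted and the s-test-opt-create secret appear while the pod keeps running; the kubelet then reconciles the volume contents, which is the "waiting to observe update in volume" step. A hedged sketch of such a volume (names and image illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalSecretVolume builds a secret volume that tolerates the secret
// being absent: with Optional=true the pod starts (and keeps running) even
// if the referenced secret is deleted or does not exist yet.
func optionalSecretVolume(volName, secretName string) corev1.Volume {
	optional := true
	mode := int32(0400)
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  secretName,
				Optional:    &optional,
				DefaultMode: &mode,
			},
		},
	}
}

func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "del", MountPath: "/etc/secret-volume-del", ReadOnly: true},
					{Name: "upd", MountPath: "/etc/secret-volume-upd", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				optionalSecretVolume("del", "s-test-opt-del"),
				optionalSecretVolume("upd", "s-test-opt-upd"),
			},
		},
	}
}

func main() { _ = examplePod() }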
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:22:58.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-2730d3e4-6e01-45fa-9e5a-a0bd49c307fd
STEP: Creating a pod to test consume configMaps
Dec 18 13:22:58.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de" in namespace "configmap-1097" to be "success or failure"
Dec 18 13:22:58.582: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Pending", Reason="", readiness=false. Elapsed: 12.303772ms
Dec 18 13:23:00.610: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04088517s
Dec 18 13:23:02.626: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057034347s
Dec 18 13:23:04.636: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066322091s
Dec 18 13:23:06.659: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090057731s
Dec 18 13:23:08.674: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104879128s
Dec 18 13:23:10.682: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.112108135s
STEP: Saw pod success
Dec 18 13:23:10.682: INFO: Pod "pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de" satisfied condition "success or failure"
Dec 18 13:23:10.685: INFO: Trying to get logs from node iruya-node pod pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de container configmap-volume-test: 
STEP: delete the pod
Dec 18 13:23:10.741: INFO: Waiting for pod pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de to disappear
Dec 18 13:23:10.888: INFO: Pod pod-configmaps-031908eb-b748-476d-b7d5-1b80e3a947de no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:23:10.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1097" for this suite.
Dec 18 13:23:16.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:23:17.037: INFO: namespace configmap-1097 deletion completed in 6.143839431s

• [SLOW TEST:18.639 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
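The "with mappings as non-root" variant above combines two things: a ConfigMap volume whose Items list remaps a key onto a custom path, and a security context that forces a non-root UID. A minimal sketch under those assumptions (name, UID, and image illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootConfigMapPod mounts a ConfigMap with a key-to-path mapping and
// runs the consuming container as a non-root user.
func nonRootConfigMapPod() *corev1.Pod {
	uid := int64(1000)
	nonRoot := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    &uid,
				RunAsNonRoot: &nonRoot,
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-2",
							Path: "path/to/data-2", // remapped location inside the mount
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = nonRootConfigMapPod() }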
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:23:17.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:23:17.129: INFO: Creating deployment "test-recreate-deployment"
Dec 18 13:23:17.182: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 18 13:23:17.209: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 18 13:23:19.236: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 18 13:23:19.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:23:21.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:23:23.249: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:23:25.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272197, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:23:27.256: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 18 13:23:27.272: INFO: Updating deployment test-recreate-deployment
Dec 18 13:23:27.272: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 18 13:23:27.754: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2046,SelfLink:/apis/apps/v1/namespaces/deployment-2046/deployments/test-recreate-deployment,UID:e46a8ae5-1560-41df-8fbf-246032aa0c7b,ResourceVersion:17138417,Generation:2,CreationTimestamp:2019-12-18 13:23:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-18 13:23:27 +0000 UTC 2019-12-18 13:23:27 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-18 13:23:27 +0000 UTC 2019-12-18 13:23:17 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 18 13:23:27.764: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2046,SelfLink:/apis/apps/v1/namespaces/deployment-2046/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c9709156-2ab1-403e-993e-43d7d4654809,ResourceVersion:17138416,Generation:1,CreationTimestamp:2019-12-18 13:23:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e46a8ae5-1560-41df-8fbf-246032aa0c7b 0xc000a4af47 0xc000a4af48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 13:23:27.764: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 18 13:23:27.765: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2046,SelfLink:/apis/apps/v1/namespaces/deployment-2046/replicasets/test-recreate-deployment-6df85df6b9,UID:38a18167-c81a-42d3-9a2d-1a1dc5abb0c6,ResourceVersion:17138405,Generation:2,CreationTimestamp:2019-12-18 13:23:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e46a8ae5-1560-41df-8fbf-246032aa0c7b 0xc000a4b027 0xc000a4b028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 13:23:27.792: INFO: Pod "test-recreate-deployment-5c8c9cc69d-g4rd2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-g4rd2,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2046,SelfLink:/api/v1/namespaces/deployment-2046/pods/test-recreate-deployment-5c8c9cc69d-g4rd2,UID:70787e50-5163-420a-ab79-7393f15eafed,ResourceVersion:17138418,Generation:0,CreationTimestamp:2019-12-18 13:23:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c9709156-2ab1-403e-993e-43d7d4654809 0xc001fd0567 0xc001fd0568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hwj2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hwj2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hwj2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fd05e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fd0600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:23:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:23:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:23:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:23:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-18 13:23:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:23:27.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2046" for this suite.
Dec 18 13:23:33.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:23:33.989: INFO: namespace deployment-2046 deletion completed in 6.192072661s

• [SLOW TEST:16.952 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
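Note on the ReplicaSet dumps above: they show the Recreate rollout in flight. The revision-1 ReplicaSet (redis) has already been scaled to Replicas:*0 before the revision-2 ReplicaSet (nginx) brings up its pod. A minimal Go sketch of a Deployment that exercises this strategy follows; the name, labels, and image mirror the log, but this is an illustration, not the suite's exact fixture code.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: the old ReplicaSet is scaled to 0 before the new one starts,
			// which is exactly the state the dumps above capture mid-rollout.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}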
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:23:33.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:23:34.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7" in namespace "downward-api-8513" to be "success or failure"
Dec 18 13:23:34.224: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.07556ms
Dec 18 13:23:36.232: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014914545s
Dec 18 13:23:38.250: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032366652s
Dec 18 13:23:40.258: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040900015s
Dec 18 13:23:42.266: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04829295s
Dec 18 13:23:44.281: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063964595s
STEP: Saw pod success
Dec 18 13:23:44.281: INFO: Pod "downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7" satisfied condition "success or failure"
Dec 18 13:23:44.287: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7 container client-container: 
STEP: delete the pod
Dec 18 13:23:44.455: INFO: Waiting for pod downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7 to disappear
Dec 18 13:23:44.539: INFO: Pod downwardapi-volume-1661964e-2077-4d58-8ef2-36aad164bac7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:23:44.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8513" for this suite.
Dec 18 13:23:50.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:23:50.794: INFO: namespace downward-api-8513 deletion completed in 6.243550386s

• [SLOW TEST:16.803 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
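Note on the test above: the pod mounts a downwardAPI volume whose file points at limits.cpu, and because the client-container sets no CPU limit, the kubelet substitutes the node's allocatable CPU into the file. A minimal sketch of such a volume follows; the file name, image, and divisor are assumptions, not the suite's exact values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No resources.limits.cpu is set, so the downward API reports
				// node allocatable CPU instead, which is what the test asserts.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit", // hypothetical file name
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"), // report the value in millicores
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}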
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:23:50.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 18 13:23:50.969: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 18 13:23:51.011: INFO: Waiting for terminating namespaces to be deleted...
Dec 18 13:23:51.018: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 18 13:23:51.047: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.048: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 13:23:51.048: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 18 13:23:51.048: INFO: 	Container weave ready: true, restart count 0
Dec 18 13:23:51.048: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 13:23:51.048: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 18 13:23:51.080: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container etcd ready: true, restart count 0
Dec 18 13:23:51.081: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 18 13:23:51.081: INFO: 	Container weave ready: true, restart count 0
Dec 18 13:23:51.081: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 13:23:51.081: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container coredns ready: true, restart count 0
Dec 18 13:23:51.081: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 18 13:23:51.081: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 13:23:51.081: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 18 13:23:51.081: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 18 13:23:51.081: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 18 13:23:51.081: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 18 13:23:51.206: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 18 13:23:51.206: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c.15e179f3842cbfb7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1673/filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c.15e179f5552687a8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c.15e179f616c978c6], Reason = [Created], Message = [Created container filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c.15e179f63a6f7929], Reason = [Started], Message = [Started container filler-pod-36373d84-fba6-4306-9ab5-e919bef5454c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba.15e179f38587c7eb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1673/filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba.15e179f4d1a28455], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba.15e179f54af41b1b], Reason = [Created], Message = [Created container filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba.15e179f56d19b025], Reason = [Started], Message = [Started container filler-pod-c15c1928-5c0a-48b5-8e9c-30b528bbcaba]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e179f6ccba6bae], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:24:06.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1673" for this suite.
Dec 18 13:24:14.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:24:14.892: INFO: namespace sched-pred-1673 deletion completed in 8.131070555s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:24.094 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
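Note on the test above: the predicate being validated is CPU requests, not limits. The test sums the per-pod requests logged above, fills each node with a pause pod sized to most of the remaining allocatable CPU, then submits one more pod whose request cannot fit anywhere, expecting the "0/2 nodes are available: 2 Insufficient cpu." event. A sketch of the shape of such a filler pod follows; the request value is illustrative, since the suite computes it per node.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func fillerPod(name, cpu string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Scheduling is decided on requests; limits play no role here.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
}

func main() {
	p := fillerPod("filler-pod-demo", "600m") // illustrative size
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}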
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:24:14.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 18 13:24:17.141: INFO: Waiting up to 5m0s for pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0" in namespace "emptydir-1293" to be "success or failure"
Dec 18 13:24:17.279: INFO: Pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0": Phase="Pending", Reason="", readiness=false. Elapsed: 138.193978ms
Dec 18 13:24:19.292: INFO: Pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150452832s
Dec 18 13:24:21.303: INFO: Pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161377086s
Dec 18 13:24:23.317: INFO: Pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175557199s
Dec 18 13:24:25.326: INFO: Pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184509001s
STEP: Saw pod success
Dec 18 13:24:25.326: INFO: Pod "pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0" satisfied condition "success or failure"
Dec 18 13:24:25.330: INFO: Trying to get logs from node iruya-node pod pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0 container test-container: 
STEP: delete the pod
Dec 18 13:24:25.386: INFO: Waiting for pod pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0 to disappear
Dec 18 13:24:25.394: INFO: Pod pod-9ee8efc0-9c8d-4b4e-886d-4fda647fc6e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:24:25.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1293" for this suite.
Dec 18 13:24:31.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:24:31.645: INFO: namespace emptydir-1293 deletion completed in 6.2351459s

• [SLOW TEST:16.753 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
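Note on the test above: "(non-root,0666,default)" means a non-root container writing a 0666-mode file into an emptyDir backed by the default medium, i.e. node-local disk. A minimal sketch of the fixture follows; the UID and command are assumptions, since the suite's mounttest image drives this via flags.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // hypothetical non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource{} selects the default medium: node disk.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}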
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:24:31.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 18 13:24:38.305: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2386 pod-service-account-1ccb25b2-f185-4588-b494-7bf8780ac25b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 18 13:24:40.795: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2386 pod-service-account-1ccb25b2-f185-4588-b494-7bf8780ac25b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 18 13:24:41.307: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2386 pod-service-account-1ccb25b2-f185-4588-b494-7bf8780ac25b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:24:41.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2386" for this suite.
Dec 18 13:24:49.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:24:49.944: INFO: namespace svcaccounts-2386 deletion completed in 8.198558764s

• [SLOW TEST:18.298 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
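Note on the test above: the three kubectl exec commands read the files that the auto-mounted service account volume projects into every pod: token, ca.crt, and namespace, all under /var/run/secrets/kubernetes.io/serviceaccount. A minimal sketch of a pod reading the same files from inside follows; the image is an assumption.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-demo"},
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			RestartPolicy:      corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				// The same three files the suite reads via `kubectl exec ... cat`:
				Command: []string{"sh", "-c",
					"cat /var/run/secrets/kubernetes.io/serviceaccount/token " +
						"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt " +
						"/var/run/secrets/kubernetes.io/serviceaccount/namespace"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}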
SS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:24:49.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a5a57d80-632a-42e7-b5cb-93a9feffa366
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:25:06.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6158" for this suite.
Dec 18 13:25:28.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:25:28.477: INFO: namespace configmap-6158 deletion completed in 22.203537741s

• [SLOW TEST:38.533 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
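Note on the test above: a ConfigMap carries UTF-8 strings in .data and arbitrary bytes in .binaryData; the test mounts both and waits until the pod sees the text and the binary content. A minimal sketch, with hypothetical keys and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-demo"},
		Data:       map[string]string{"data": "value-1"},          // UTF-8 text keys
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe}}, // arbitrary bytes
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out)) // binaryData is rendered base64-encoded on the wire
}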
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:25:28.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 18 13:25:28.652: INFO: PodSpec: initContainers in spec.initContainers
Dec 18 13:26:33.837: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-556e98c2-55ba-4ce0-a125-9bda70e87b09", GenerateName:"", Namespace:"init-container-7494", SelfLink:"/api/v1/namespaces/init-container-7494/pods/pod-init-556e98c2-55ba-4ce0-a125-9bda70e87b09", UID:"ccd6f0fe-a6c4-4867-8c13-73254d6d45d2", ResourceVersion:"17138871", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712272328, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"652928381"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bwvgx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a7f7c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bwvgx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bwvgx", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bwvgx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f245e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d693e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f24670)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f24690)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f24698), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f2469c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272328, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272328, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272328, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712272328, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc000c47880), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002135500)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002135570)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://94ba52a3365a0a7cd1804219955266d408f7b3bdf9efeade5852474be9cacfc5"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c47be0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000c47b20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:26:33.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7494" for this suite.
Dec 18 13:26:55.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:26:56.073: INFO: namespace init-container-7494 deletion completed in 22.200576153s

• [SLOW TEST:87.594 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
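Note on the test above: the pod dump shows the fixture directly: init1 runs /bin/false and keeps failing (RestartCount:3 and climbing, since RestartPolicy is Always), init2 never starts, and the app container run1 stays Waiting. A stripped-down sketch of the same spec follows; the suite additionally gives run1 Guaranteed-class CPU/memory resources, omitted here for brevity.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 exits non-zero, so the kubelet restarts it indefinitely
				// and never starts init2 or the app container.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}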
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:26:56.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-2550ea37-18d6-4d41-bc92-aaa0ec8670f8
STEP: Creating a pod to test consume secrets
Dec 18 13:26:56.217: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97" in namespace "projected-5143" to be "success or failure"
Dec 18 13:26:56.229: INFO: Pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97": Phase="Pending", Reason="", readiness=false. Elapsed: 11.996025ms
Dec 18 13:26:58.241: INFO: Pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023987824s
Dec 18 13:27:00.249: INFO: Pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032599942s
Dec 18 13:27:02.300: INFO: Pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083693753s
Dec 18 13:27:04.308: INFO: Pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091643165s
STEP: Saw pod success
Dec 18 13:27:04.308: INFO: Pod "pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97" satisfied condition "success or failure"
Dec 18 13:27:04.312: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 13:27:04.366: INFO: Waiting for pod pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97 to disappear
Dec 18 13:27:04.402: INFO: Pod pod-projected-secrets-2a3658bb-debc-4fe8-92cc-069d9f601b97 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:27:04.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5143" for this suite.
Dec 18 13:27:10.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:27:10.657: INFO: namespace projected-5143 deletion completed in 6.247313833s

• [SLOW TEST:14.585 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
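Note on the test above: a projected volume merges several sources (secret, configMap, downwardAPI, serviceAccountToken) under one mount point; here a single secret source is projected and read back. A minimal sketch follows; the mount path and key layout are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-demo"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}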
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:27:10.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 18 13:27:19.935: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:27:20.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5011" for this suite.
Dec 18 13:27:26.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:27:26.258: INFO: namespace container-runtime-5011 deletion completed in 6.200595307s

• [SLOW TEST:15.598 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
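Note on the test above: with TerminationMessagePolicy FallbackToLogsOnError, a container that fails without writing to its termination-message path gets the tail of its log as the termination message instead, which is why the suite sees "DONE" (written only to stdout) in the status. A minimal sketch of such a container:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "docker.io/library/busybox:1.29",
				// Writes nothing to /dev/termination-log and exits non-zero, so the
				// kubelet falls back to the container log tail: "DONE".
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}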
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:27:26.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-380/configmap-test-67b50d7e-07e7-4697-b7d2-02c179e0eb49
STEP: Creating a pod to test consume configMaps
Dec 18 13:27:26.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c" in namespace "configmap-380" to be "success or failure"
Dec 18 13:27:26.490: INFO: Pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c": Phase="Pending", Reason="", readiness=false. Elapsed: 69.086365ms
Dec 18 13:27:28.508: INFO: Pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087239655s
Dec 18 13:27:30.524: INFO: Pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10274698s
Dec 18 13:27:32.541: INFO: Pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120467471s
Dec 18 13:27:34.558: INFO: Pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137095145s
STEP: Saw pod success
Dec 18 13:27:34.559: INFO: Pod "pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c" satisfied condition "success or failure"
Dec 18 13:27:34.564: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c container env-test: 
STEP: delete the pod
Dec 18 13:27:34.754: INFO: Waiting for pod pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c to disappear
Dec 18 13:27:34.770: INFO: Pod pod-configmaps-bc0245c5-477d-4ffe-b256-1fb9b08f890c no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:27:34.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-380" for this suite.
Dec 18 13:27:40.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:27:41.000: INFO: namespace configmap-380 deletion completed in 6.217773805s

• [SLOW TEST:14.741 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
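Note on the test above: "consumable via environment variable" means a single ConfigMap key wired into one env var through valueFrom/configMapKeyRef; the env-test container then dumps its environment for verification. A minimal sketch, with hypothetical variable and key names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "env-test",
		Image:   "docker.io/library/busybox:1.29", // illustrative image
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1", // hypothetical variable name
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-demo"},
					Key:                  "data-1", // hypothetical key
				},
			},
		}},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}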
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:27:41.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:27:41.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:27:49.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5623" for this suite.
Dec 18 13:28:41.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:28:42.116: INFO: namespace pods-5623 deletion completed in 52.616898699s

• [SLOW TEST:61.115 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
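Note on the test above: rather than going through kubectl's SPDY path, the suite dials the API server's pod exec subresource directly over a websocket. A sketch of the endpoint shape follows, with hypothetical host, pod, and command; dialing it with the "channel.k8s.io" subprotocol yields framed messages whose first byte is the stream number (1 = stdout).

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical values; the suite resolves these from the kubeconfig.
	base, _ := url.Parse("https://apiserver.example:6443")
	base.Scheme = "wss" // websocket upgrade over TLS
	base.Path = fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/exec", "pods-5623", "pod-exec-websocket-demo")

	q := url.Values{}
	q.Add("command", "echo")
	q.Add("command", "remote execution test") // each argv element is its own command= param
	q.Set("stdout", "1")
	q.Set("stderr", "1")
	base.RawQuery = q.Encode()

	fmt.Println(base.String())
}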
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:28:42.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 18 13:28:42.307: INFO: Waiting up to 5m0s for pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f" in namespace "emptydir-2580" to be "success or failure"
Dec 18 13:28:42.319: INFO: Pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.86502ms
Dec 18 13:28:44.331: INFO: Pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023794361s
Dec 18 13:28:46.341: INFO: Pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033517347s
Dec 18 13:28:48.354: INFO: Pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046962981s
Dec 18 13:28:50.364: INFO: Pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056778536s
STEP: Saw pod success
Dec 18 13:28:50.364: INFO: Pod "pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f" satisfied condition "success or failure"
Dec 18 13:28:50.368: INFO: Trying to get logs from node iruya-node pod pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f container test-container: 
STEP: delete the pod
Dec 18 13:28:50.460: INFO: Waiting for pod pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f to disappear
Dec 18 13:28:50.515: INFO: Pod pod-c83515d9-cd1b-4024-ba7b-38eca26b9a1f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:28:50.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2580" for this suite.
Dec 18 13:28:58.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:28:58.703: INFO: namespace emptydir-2580 deletion completed in 8.177837877s

• [SLOW TEST:16.586 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
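Note on the test above: this is the tmpfs variant of the earlier emptyDir case; the only structural difference in the fixture is the medium, which makes the kubelet back the directory with RAM instead of node disk. A minimal sketch of just that volume:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// Medium "Memory" mounts a tmpfs; writes count against the pod's memory.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}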
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:28:58.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4229/configmap-test-bc6ddffe-8e9a-4a7f-ac6f-9561fe50d4bf
STEP: Creating a pod to test consume configMaps
Dec 18 13:28:58.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845" in namespace "configmap-4229" to be "success or failure"
Dec 18 13:28:58.864: INFO: Pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845": Phase="Pending", Reason="", readiness=false. Elapsed: 10.2918ms
Dec 18 13:29:00.895: INFO: Pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041354605s
Dec 18 13:29:02.908: INFO: Pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054105712s
Dec 18 13:29:04.915: INFO: Pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061414566s
Dec 18 13:29:06.925: INFO: Pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070967745s
STEP: Saw pod success
Dec 18 13:29:06.925: INFO: Pod "pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845" satisfied condition "success or failure"
Dec 18 13:29:06.930: INFO: Trying to get logs from node iruya-node pod pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845 container env-test: 
STEP: delete the pod
Dec 18 13:29:07.012: INFO: Waiting for pod pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845 to disappear
Dec 18 13:29:07.018: INFO: Pod pod-configmaps-deb6344f-3747-4150-9281-8cddd5bf5845 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:29:07.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4229" for this suite.
Dec 18 13:29:13.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:29:13.165: INFO: namespace configmap-4229 deletion completed in 6.136760601s

• [SLOW TEST:14.458 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
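Note on the test above: "consumable via the environment" differs from the earlier single-key case by importing the whole ConfigMap at once through envFrom, so every key becomes an environment variable. A minimal sketch, with a hypothetical prefix and ConfigMap name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:    "env-test",
		Image:   "docker.io/library/busybox:1.29", // illustrative image
		Command: []string{"sh", "-c", "env"},
		EnvFrom: []corev1.EnvFromSource{{
			Prefix:       "p_", // hypothetical prefix prepended to every imported key
			ConfigMapRef: &corev1.ConfigMapEnvSource{LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-demo"}},
		}},
	}
	out, _ := json.MarshalIndent(container, "", "  ")
	fmt.Println(string(out))
}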
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:29:13.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:29:19.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5781" for this suite.
Dec 18 13:29:26.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:29:26.117: INFO: namespace namespaces-5781 deletion completed in 6.142675404s
STEP: Destroying namespace "nsdeletetest-1095" for this suite.
Dec 18 13:29:26.121: INFO: Namespace nsdeletetest-1095 was already deleted
STEP: Destroying namespace "nsdeletetest-6363" for this suite.
Dec 18 13:29:32.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:29:32.233: INFO: namespace nsdeletetest-6363 deletion completed in 6.112315692s

• [SLOW TEST:19.068 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
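Note on the test above: namespace deletion cascades to every namespaced object, so after the namespace is removed and recreated under the same name, listing its services must come back empty. A minimal client-go sketch of that check follows, assuming a recent client-go (context-taking call signatures) and KUBECONFIG pointing at a cluster; the namespace name is hypothetical.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	ns := "nsdeletetest-demo" // hypothetical namespace holding a service
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// ... poll until the Namespace object is gone, then recreate it ...

	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services surviving in recreated namespace: %d\n", len(svcs.Items)) // expect 0
}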
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:29:32.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:29:32.432: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.82808ms)
Dec 18 13:29:32.444: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.983576ms)
Dec 18 13:29:32.451: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.987351ms)
Dec 18 13:29:32.456: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.854782ms)
Dec 18 13:29:32.460: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.267866ms)
Dec 18 13:29:32.468: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.924941ms)
Dec 18 13:29:32.473: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.235179ms)
Dec 18 13:29:32.482: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.560106ms)
Dec 18 13:29:32.489: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.962139ms)
Dec 18 13:29:32.496: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.106499ms)
Dec 18 13:29:32.558: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 61.789929ms)
Dec 18 13:29:32.565: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.987121ms)
Dec 18 13:29:32.569: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.092609ms)
Dec 18 13:29:32.573: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.446356ms)
Dec 18 13:29:32.576: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.532227ms)
Dec 18 13:29:32.582: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.779955ms)
Dec 18 13:29:32.593: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.136046ms)
Dec 18 13:29:32.602: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.400212ms)
Dec 18 13:29:32.608: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.875636ms)
Dec 18 13:29:32.612: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.544671ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:29:32.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8894" for this suite.
Dec 18 13:29:38.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:29:38.809: INFO: namespace proxy-8894 deletion completed in 6.192324536s

• [SLOW TEST:6.575 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
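
Each numbered INFO line above is one timed GET against the node's logs proxy subresource; the truncated body is the listing of the node's log directory (typically /var/log). The same endpoint can be queried directly, using the node name from this run:

# Read the node's log listing through the apiserver proxy subresource.
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"

# Or via a local apiserver proxy:
kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/"
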
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:29:38.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 18 13:29:38.984: INFO: Waiting up to 5m0s for pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b" in namespace "emptydir-5615" to be "success or failure"
Dec 18 13:29:38.988: INFO: Pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.777324ms
Dec 18 13:29:41.014: INFO: Pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030036404s
Dec 18 13:29:43.020: INFO: Pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036020936s
Dec 18 13:29:45.028: INFO: Pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043414211s
Dec 18 13:29:47.040: INFO: Pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05577754s
STEP: Saw pod success
Dec 18 13:29:47.041: INFO: Pod "pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b" satisfied condition "success or failure"
Dec 18 13:29:47.045: INFO: Trying to get logs from node iruya-node pod pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b container test-container: 
STEP: delete the pod
Dec 18 13:29:47.142: INFO: Waiting for pod pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b to disappear
Dec 18 13:29:47.146: INFO: Pod pod-84f9b711-4ee6-4204-b4ae-ff2c70ec432b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:29:47.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5615" for this suite.
Dec 18 13:29:53.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:29:53.317: INFO: namespace emptydir-5615 deletion completed in 6.159811862s

• [SLOW TEST:14.507 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
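
The pod under test here is built in Go by the e2e framework; a hand-written YAML sketch of the same idea follows (names hypothetical; the (root,0644,default) case in the next spec differs only in the file mode and in omitting medium: Memory):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo          # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Write a file with the mode under test, show its permissions, and
    # confirm the mount is tmpfs.
    command: ["sh", "-c", "echo data > /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs; omit for the node-default medium
EOF
kubectl logs emptydir-mode-demo
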
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:29:53.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 18 13:29:53.429: INFO: Waiting up to 5m0s for pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4" in namespace "emptydir-3334" to be "success or failure"
Dec 18 13:29:53.433: INFO: Pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263029ms
Dec 18 13:29:55.444: INFO: Pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015270522s
Dec 18 13:29:57.454: INFO: Pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024823139s
Dec 18 13:29:59.463: INFO: Pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033937326s
Dec 18 13:30:01.489: INFO: Pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059951979s
STEP: Saw pod success
Dec 18 13:30:01.489: INFO: Pod "pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4" satisfied condition "success or failure"
Dec 18 13:30:01.539: INFO: Trying to get logs from node iruya-node pod pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4 container test-container: 
STEP: delete the pod
Dec 18 13:30:01.622: INFO: Waiting for pod pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4 to disappear
Dec 18 13:30:01.740: INFO: Pod pod-a932dcd9-e4e3-4a80-ab1a-f1493e253cd4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:30:01.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3334" for this suite.
Dec 18 13:30:07.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:30:08.094: INFO: namespace emptydir-3334 deletion completed in 6.27553399s

• [SLOW TEST:14.776 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:30:08.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2e06ae1c-f395-4aa0-979f-c059d8abc7fb
STEP: Creating a pod to test consume configMaps
Dec 18 13:30:08.414: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9" in namespace "projected-7010" to be "success or failure"
Dec 18 13:30:08.453: INFO: Pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9": Phase="Pending", Reason="", readiness=false. Elapsed: 38.789946ms
Dec 18 13:30:10.461: INFO: Pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047558054s
Dec 18 13:30:12.485: INFO: Pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071300108s
Dec 18 13:30:14.507: INFO: Pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093433785s
Dec 18 13:30:16.520: INFO: Pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106549833s
STEP: Saw pod success
Dec 18 13:30:16.521: INFO: Pod "pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9" satisfied condition "success or failure"
Dec 18 13:30:16.527: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 13:30:16.642: INFO: Waiting for pod pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9 to disappear
Dec 18 13:30:16.648: INFO: Pod pod-projected-configmaps-6ab5d82b-f679-4b9a-812e-e3c13f2f09b9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:30:16.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7010" for this suite.
Dec 18 13:30:22.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:30:22.866: INFO: namespace projected-7010 deletion completed in 6.145475265s

• [SLOW TEST:14.771 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
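
"With mappings" means the configMap key is projected to a caller-chosen path rather than its own name. A sketch with hypothetical names:

kubectl create configmap demo-cm --from-literal=data-1=value-1   # hypothetical

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1
            path: path/to/data-2     # key remapped to a custom path
EOF
kubectl logs projected-cm-demo       # expect: value-1
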
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:30:22.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 18 13:30:23.036: INFO: Waiting up to 5m0s for pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2" in namespace "containers-3797" to be "success or failure"
Dec 18 13:30:23.045: INFO: Pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.181395ms
Dec 18 13:30:25.123: INFO: Pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08711653s
Dec 18 13:30:27.139: INFO: Pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103011562s
Dec 18 13:30:29.152: INFO: Pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115739863s
Dec 18 13:30:31.187: INFO: Pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151093909s
STEP: Saw pod success
Dec 18 13:30:31.187: INFO: Pod "client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2" satisfied condition "success or failure"
Dec 18 13:30:31.191: INFO: Trying to get logs from node iruya-node pod client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2 container test-container: 
STEP: delete the pod
Dec 18 13:30:31.254: INFO: Waiting for pod client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2 to disappear
Dec 18 13:30:31.348: INFO: Pod client-containers-972b8fc2-de1d-4789-9a70-d323b0bf38b2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:30:31.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3797" for this suite.
Dec 18 13:30:37.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:30:37.502: INFO: namespace containers-3797 deletion completed in 6.144620703s

• [SLOW TEST:14.634 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
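
The mechanics being verified: in a pod spec, args replaces the image's CMD, while command (unset here) would replace its ENTRYPOINT. A sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # busybox's default CMD is "sh"; these args replace it entirely.
    args: ["echo", "overridden arguments"]
EOF
kubectl logs override-args-demo      # expect: overridden arguments
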
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:30:37.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 13:30:37.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5898'
Dec 18 13:30:37.803: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 13:30:37.803: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 18 13:30:37.842: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qrjrt]
Dec 18 13:30:37.842: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qrjrt" in namespace "kubectl-5898" to be "running and ready"
Dec 18 13:30:37.852: INFO: Pod "e2e-test-nginx-rc-qrjrt": Phase="Pending", Reason="", readiness=false. Elapsed: 9.59498ms
Dec 18 13:30:39.869: INFO: Pod "e2e-test-nginx-rc-qrjrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027075425s
Dec 18 13:30:41.880: INFO: Pod "e2e-test-nginx-rc-qrjrt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038115654s
Dec 18 13:30:43.903: INFO: Pod "e2e-test-nginx-rc-qrjrt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06087652s
Dec 18 13:30:45.946: INFO: Pod "e2e-test-nginx-rc-qrjrt": Phase="Running", Reason="", readiness=true. Elapsed: 8.10352834s
Dec 18 13:30:45.946: INFO: Pod "e2e-test-nginx-rc-qrjrt" satisfied condition "running and ready"
Dec 18 13:30:45.946: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qrjrt]
Dec 18 13:30:45.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5898'
Dec 18 13:30:46.180: INFO: stderr: ""
Dec 18 13:30:46.180: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 18 13:30:46.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5898'
Dec 18 13:30:46.342: INFO: stderr: ""
Dec 18 13:30:46.342: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:30:46.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5898" for this suite.
Dec 18 13:31:08.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:31:08.558: INFO: namespace kubectl-5898 deletion completed in 22.18388332s

• [SLOW TEST:31.056 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
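
As the stderr above notes, the run/v1 generator (which creates a ReplicationController) was already deprecated in this release and has since been removed. A sketch of both the invocation from this run and a present-day equivalent:

# kubectl of this era: creates replicationcontroller/e2e-test-nginx-rc
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1

# Logs can be fetched through the controller; kubectl picks one of its pods.
kubectl logs rc/e2e-test-nginx-rc

# Modern kubectl (generators removed): create the workload explicitly instead.
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
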
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:31:08.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 18 13:31:17.949: INFO: 10 pods remaining
Dec 18 13:31:17.949: INFO: 0 pods have nil DeletionTimestamp
Dec 18 13:31:17.949: INFO: 
STEP: Gathering metrics
W1218 13:31:18.412692       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 13:31:18.413: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:31:18.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3154" for this suite.
Dec 18 13:31:28.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:31:28.867: INFO: namespace gc-3154 deletion completed in 10.4362714s

• [SLOW TEST:20.304 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
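
"deleteOptions says so" here means foreground cascading deletion: the owner is kept, with a deletionTimestamp set, until the garbage collector has deleted every dependent pod. Two ways to request it (rc name hypothetical; the --cascade=foreground spelling comes from kubectl releases newer than the one in this run):

# Newer kubectl:
kubectl delete rc my-rc --cascade=foreground

# Raw DeleteOptions against the API, independent of kubectl version:
kubectl proxy --port=8001 &
curl -s -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  "http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc"
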
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:31:28.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6126
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 13:31:29.011: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 13:32:05.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6126 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 13:32:05.355: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 13:32:06.066: INFO: Found all expected endpoints: [netserver-0]
Dec 18 13:32:06.081: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.3:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6126 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 13:32:06.081: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 13:32:06.379: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:32:06.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6126" for this suite.
Dec 18 13:32:30.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:32:30.736: INFO: namespace pod-network-test-6126 deletion completed in 24.348495906s

• [SLOW TEST:61.866 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
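
The two ExecWithOptions lines above are the whole check: from a host-network helper pod, curl each netserver pod's /hostName endpoint by pod IP and match the reply against the pod's name. By hand, against pods from such a run (names as in this log; the test namespace is recreated each run):

POD_IP=$(kubectl get pod netserver-0 -n pod-network-test-6126 -o jsonpath='{.status.podIP}')
kubectl exec -n pod-network-test-6126 host-test-container-pod -c hostexec -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 "http://${POD_IP}:8080/hostName"
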
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:32:30.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7006, will wait for the garbage collector to delete the pods
Dec 18 13:32:40.972: INFO: Deleting Job.batch foo took: 8.25265ms
Dec 18 13:32:41.273: INFO: Terminating Job.batch foo pods took: 300.482104ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:33:26.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7006" for this suite.
Dec 18 13:33:32.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:33:32.721: INFO: namespace job-7006 deletion completed in 6.126546077s

• [SLOW TEST:61.985 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
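
The shape of this test: a parallel job of long-sleeping pods is created, deleted, and then the garbage collector is given time to remove the pods (hence the ~45 s gap before AfterEach). A rough hand-run version with a hypothetical job name:

kubectl create job demo-job --image=busybox -- sh -c 'sleep 3600'   # hypothetical name
kubectl delete job demo-job
kubectl get pods -l job-name=demo-job    # should drain to empty as GC catches up
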
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:33:32.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 18 13:33:42.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-8ae02a85-8a8f-4717-bd77-a2413c57856d -c busybox-main-container --namespace=emptydir-3644 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 18 13:33:43.408: INFO: stderr: ""
Dec 18 13:33:43.409: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:33:43.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3644" for this suite.
Dec 18 13:33:49.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:33:49.659: INFO: namespace emptydir-3644 deletion completed in 6.240551129s

• [SLOW TEST:16.937 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
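
The exec above reads, from the main container, a file written by a sibling container into a shared emptyDir. A self-contained sketch (pod name hypothetical; container names mirror the ones in this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo           # hypothetical
spec:
  containers:
  - name: busybox-main-container     # the reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container      # the writer
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /usr/share/volumeshare
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec shared-volume-demo -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt
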
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:33:49.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d
Dec 18 13:33:49.770: INFO: Pod name my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d: Found 0 pods out of 1
Dec 18 13:33:54.790: INFO: Pod name my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d: Found 1 pods out of 1
Dec 18 13:33:54.791: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d" are running
Dec 18 13:33:58.814: INFO: Pod "my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d-88msq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:33:49 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:33:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:33:49 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:33:49 +0000 UTC Reason: Message:}])
Dec 18 13:33:58.815: INFO: Trying to dial the pod
Dec 18 13:34:03.875: INFO: Controller my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d: Got expected result from replica 1 [my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d-88msq]: "my-hostname-basic-f080574b-8ac3-4ff2-99d9-e1115d9d411d-88msq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:34:03.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5956" for this suite.
Dec 18 13:34:09.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:34:10.134: INFO: namespace replication-controller-5956 deletion completed in 6.24352999s

• [SLOW TEST:20.475 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:34:10.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 18 13:34:10.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 18 13:34:10.382: INFO: stderr: ""
Dec 18 13:34:10.382: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:34:10.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1114" for this suite.
Dec 18 13:34:16.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:34:16.548: INFO: namespace kubectl-1114 deletion completed in 6.154117234s

• [SLOW TEST:6.414 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:34:16.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b6a7a148-6647-430f-a396-8e25cc7943da
STEP: Creating a pod to test consume configMaps
Dec 18 13:34:16.793: INFO: Waiting up to 5m0s for pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5" in namespace "configmap-8676" to be "success or failure"
Dec 18 13:34:16.807: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.059151ms
Dec 18 13:34:18.817: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023263457s
Dec 18 13:34:20.827: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033631313s
Dec 18 13:34:22.834: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040046547s
Dec 18 13:34:24.844: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050205863s
Dec 18 13:34:26.861: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067663069s
STEP: Saw pod success
Dec 18 13:34:26.862: INFO: Pod "pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5" satisfied condition "success or failure"
Dec 18 13:34:26.872: INFO: Trying to get logs from node iruya-node pod pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5 container configmap-volume-test: 
STEP: delete the pod
Dec 18 13:34:27.574: INFO: Waiting for pod pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5 to disappear
Dec 18 13:34:27.585: INFO: Pod pod-configmaps-837662ab-b598-46d6-b990-3948d46e0fb5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:34:27.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8676" for this suite.
Dec 18 13:34:33.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:34:33.742: INFO: namespace configmap-8676 deletion completed in 6.143975911s

• [SLOW TEST:17.193 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
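
"As non-root" is driven by the pod-level securityContext; configMap volume files default to mode 0644, so an arbitrary UID can still read them. A sketch with hypothetical names:

kubectl create configmap demo-cm-nonroot --from-literal=data-1=value-1   # hypothetical

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo       # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # the non-root UID under test
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "id && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: demo-cm-nonroot
EOF
kubectl logs configmap-nonroot-demo
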
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:34:33.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:34:33.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 18 13:34:34.142: INFO: stderr: ""
Dec 18 13:34:34.143: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-14T21:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:34:34.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2138" for this suite.
Dec 18 13:34:40.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:34:40.336: INFO: namespace kubectl-2138 deletion completed in 6.168817347s

• [SLOW TEST:6.591 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:34:40.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 18 13:34:40.441: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 18 13:34:40.451: INFO: Waiting for terminating namespaces to be deleted...
Dec 18 13:34:40.455: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 18 13:34:40.473: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 18 13:34:40.473: INFO: 	Container weave ready: true, restart count 0
Dec 18 13:34:40.473: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 13:34:40.473: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.473: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 13:34:40.473: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 18 13:34:40.487: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 18 13:34:40.487: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 18 13:34:40.487: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container coredns ready: true, restart count 0
Dec 18 13:34:40.487: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container etcd ready: true, restart count 0
Dec 18 13:34:40.487: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container weave ready: true, restart count 0
Dec 18 13:34:40.487: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 13:34:40.487: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container coredns ready: true, restart count 0
Dec 18 13:34:40.487: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 18 13:34:40.487: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 18 13:34:40.487: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-306caa11-8222-4df9-955d-ec06ed260ede 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-306caa11-8222-4df9-955d-ec06ed260ede off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-306caa11-8222-4df9-955d-ec06ed260ede
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:34:58.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8343" for this suite.
Dec 18 13:35:19.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:35:19.245: INFO: namespace sched-pred-8343 deletion completed in 20.136822981s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:38.907 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
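
The steps above, by hand: label a node, schedule a pod whose nodeSelector requires that label, then remove the label. The label key and pod name below are hypothetical:

kubectl label node iruya-node example.com/e2e-demo=42

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo            # hypothetical
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF

kubectl get pod nodeselector-demo -o wide            # should land on iruya-node
kubectl label node iruya-node example.com/e2e-demo-  # trailing "-" removes the label
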
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:35:19.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:35:19.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066" in namespace "projected-741" to be "success or failure"
Dec 18 13:35:19.336: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066": Phase="Pending", Reason="", readiness=false. Elapsed: 7.28493ms
Dec 18 13:35:21.354: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025846255s
Dec 18 13:35:23.374: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045518639s
Dec 18 13:35:25.393: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064113729s
Dec 18 13:35:27.402: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066": Phase="Running", Reason="", readiness=true. Elapsed: 8.073866089s
Dec 18 13:35:29.443: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114924414s
STEP: Saw pod success
Dec 18 13:35:29.444: INFO: Pod "downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066" satisfied condition "success or failure"
Dec 18 13:35:29.456: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066 container client-container: 
STEP: delete the pod
Dec 18 13:35:29.520: INFO: Waiting for pod downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066 to disappear
Dec 18 13:35:29.526: INFO: Pod downwardapi-volume-339dfa3c-570e-4d51-aaeb-e97d3815a066 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:35:29.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-741" for this suite.
Dec 18 13:35:35.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:35:35.747: INFO: namespace projected-741 deletion completed in 6.215687013s

• [SLOW TEST:16.500 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
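
The per-item mode knob being tested sits on the projected downwardAPI item. A sketch (names hypothetical; note YAML reads 0400 as octal):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the effective file mode, then the projected content.
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400               # the per-item file mode under test
EOF
kubectl logs downwardapi-mode-demo
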
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:35:35.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Dec 18 13:35:43.973: INFO: Pod pod-hostip-9af17a4f-b092-4962-b3d3-718185e41c82 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:35:43.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7335" for this suite.
Dec 18 13:36:06.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:36:06.129: INFO: namespace pods-7335 deletion completed in 22.142137656s

• [SLOW TEST:30.382 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
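
status.hostIP, read above, is an ordinary status field on any running pod. One way to see it across pods:

# Print each pod's name alongside the IP of the node hosting it.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.hostIP}{"\n"}{end}'
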
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:36:06.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:36:14.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2131" for this suite.
Dec 18 13:36:20.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:36:20.513: INFO: namespace kubelet-test-2131 deletion completed in 6.277479052s

• [SLOW TEST:14.382 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
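
The assertion hidden inside the [It] above: a container that always fails ends up with state.terminated populated, including a non-empty Reason. A sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
kubectl get pod always-fails-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # e.g. Error
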
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:36:20.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Dec 18 13:36:20.684: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 18 13:36:25.694: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:36:25.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8822" for this suite.
Dec 18 13:36:32.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:36:32.220: INFO: namespace replication-controller-8822 deletion completed in 6.353561564s

• [SLOW TEST:11.706 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
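
"Released" means the pod's controllerRef is dropped once its labels stop matching the RC's selector; the RC then creates a replacement to hold replicas. A sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo             # hypothetical
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# Overwrite the matched label on the RC's pod: it is orphaned ("released"),
# and the RC spins up a new pod to satisfy replicas=1.
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite
kubectl get pods --show-labels
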
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:36:32.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 18 13:36:32.449: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 18 13:36:32.506: INFO: Waiting for terminating namespaces to be deleted...
Dec 18 13:36:32.512: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 18 13:36:32.531: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.531: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 13:36:32.531: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 18 13:36:32.531: INFO: 	Container weave ready: true, restart count 0
Dec 18 13:36:32.531: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 13:36:32.545: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 18 13:36:32.545: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container kube-controller-manager ready: true, restart count 10
Dec 18 13:36:32.545: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 18 13:36:32.545: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 18 13:36:32.545: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container kube-scheduler ready: true, restart count 7
Dec 18 13:36:32.545: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container coredns ready: true, restart count 0
Dec 18 13:36:32.545: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container etcd ready: true, restart count 0
Dec 18 13:36:32.545: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 18 13:36:32.545: INFO: 	Container weave ready: true, restart count 0
Dec 18 13:36:32.545: INFO: 	Container weave-npc ready: true, restart count 0
Dec 18 13:36:32.545: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 18 13:36:32.545: INFO: 	Container coredns ready: true, restart count 0
Dec 18 13:36:32.545: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e17aa4bfe57811], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:36:33.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8601" for this suite.
Dec 18 13:36:39.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:36:39.845: INFO: namespace sched-pred-8601 deletion completed in 6.252605189s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.622 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
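
The FailedScheduling event above is exactly what the scheduler emits when a pod's nodeSelector matches no node: "0/2 nodes are available: 2 node(s) didn't match node selector." A minimal sketch of such a pod, with a hypothetical label that neither iruya-node nor iruya-server-sfge57q7djm7 carries:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: no-such-label              # hypothetical; no node has this label, so the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1     # any image works; the pod is never scheduled, so it never pulls
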
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:36:39.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Dec 18 13:36:39.980: INFO: Waiting up to 5m0s for pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92" in namespace "var-expansion-2882" to be "success or failure"
Dec 18 13:36:39.990: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92": Phase="Pending", Reason="", readiness=false. Elapsed: 9.458363ms
Dec 18 13:36:42.000: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01949309s
Dec 18 13:36:44.014: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033510303s
Dec 18 13:36:46.032: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051849057s
Dec 18 13:36:48.043: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92": Phase="Running", Reason="", readiness=true. Elapsed: 8.06326992s
Dec 18 13:36:50.071: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091339658s
STEP: Saw pod success
Dec 18 13:36:50.072: INFO: Pod "var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92" satisfied condition "success or failure"
Dec 18 13:36:50.075: INFO: Trying to get logs from node iruya-node pod var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92 container dapi-container: 
STEP: delete the pod
Dec 18 13:36:50.149: INFO: Waiting for pod var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92 to disappear
Dec 18 13:36:50.158: INFO: Pod var-expansion-403acbfe-96ec-4432-a6b3-9160d4c64c92 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:36:50.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2882" for this suite.
Dec 18 13:36:56.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:36:56.409: INFO: namespace var-expansion-2882 deletion completed in 6.240880757s

• [SLOW TEST:16.564 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
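
"Substituting values in a container's command" refers to kubelet-side $(VAR) expansion: variables declared under env may be referenced in command and args and are expanded before the container starts, with no shell involved. A minimal sketch under assumed names (the real test's variable and output differ):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/echo", "$(MESSAGE)"]   # expanded by Kubernetes, not by a shell
    env:
    - name: MESSAGE
      value: "test-value"

The test asserts on the pod's log output, which is why the run above waits for phase Succeeded and then pulls logs from the dapi-container.
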
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:36:56.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-89cee4ba-9b97-40f7-8cd3-ac0db098bf13
STEP: Creating a pod to test consume secrets
Dec 18 13:36:56.530: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6" in namespace "projected-9742" to be "success or failure"
Dec 18 13:36:56.602: INFO: Pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6": Phase="Pending", Reason="", readiness=false. Elapsed: 71.536546ms
Dec 18 13:36:58.611: INFO: Pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080188916s
Dec 18 13:37:00.619: INFO: Pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088630292s
Dec 18 13:37:02.669: INFO: Pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138372024s
Dec 18 13:37:04.684: INFO: Pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153128113s
STEP: Saw pod success
Dec 18 13:37:04.684: INFO: Pod "pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6" satisfied condition "success or failure"
Dec 18 13:37:04.692: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 13:37:04.744: INFO: Waiting for pod pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6 to disappear
Dec 18 13:37:04.776: INFO: Pod pod-projected-secrets-eb0b28a9-666f-4d05-b8ff-de45ff3e6be6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:37:04.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9742" for this suite.
Dec 18 13:37:10.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:37:10.914: INFO: namespace projected-9742 deletion completed in 6.133036346s

• [SLOW TEST:14.505 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
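
This spec mounts a secret through a projected volume and checks that every projected file is created with the volume-wide defaultMode. A sketch under assumed names (the secret name echoes the "Creating projection with secret..." step; the real test uses a purpose-built mount-test image rather than busybox):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400             # every projected file gets mode -r--------
      sources:
      - secret:
          name: projected-secret-test
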
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:37:10.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-9c052040-1893-4528-ab27-64c4aa940165
STEP: Creating a pod to test consume secrets
Dec 18 13:37:11.061: INFO: Waiting up to 5m0s for pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19" in namespace "secrets-2672" to be "success or failure"
Dec 18 13:37:11.064: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.531816ms
Dec 18 13:37:13.075: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013695873s
Dec 18 13:37:15.082: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01998906s
Dec 18 13:37:17.092: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030044988s
Dec 18 13:37:19.099: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037278371s
Dec 18 13:37:21.168: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106307067s
Dec 18 13:37:23.177: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.114972149s
STEP: Saw pod success
Dec 18 13:37:23.177: INFO: Pod "pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19" satisfied condition "success or failure"
Dec 18 13:37:23.180: INFO: Trying to get logs from node iruya-node pod pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19 container secret-volume-test: 
STEP: delete the pod
Dec 18 13:37:23.291: INFO: Waiting for pod pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19 to disappear
Dec 18 13:37:23.452: INFO: Pod pod-secrets-0c491380-0d5b-49e5-acf1-87f6a002bb19 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:37:23.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2672" for this suite.
Dec 18 13:37:29.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:37:29.908: INFO: namespace secrets-2672 deletion completed in 6.418585284s

• [SLOW TEST:18.993 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
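
Here the secret volume remaps a key to a new path and overrides the file mode per item instead of for the whole volume. Sketch with hypothetical key and path names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map   # matches the "Creating secret with name secret-test-map-..." step
      items:
      - key: data-1                 # hypothetical key in the secret
        path: new-path-data-1       # appears as <mountPath>/new-path-data-1
        mode: 0400                  # per-item mode; overrides any defaultMode for this file
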
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:37:29.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1218 13:37:40.064446       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 13:37:40.064: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:37:40.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2936" for this suite.
Dec 18 13:37:48.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:37:48.285: INFO: namespace gc-2936 deletion completed in 8.21612818s

• [SLOW TEST:18.374 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
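
"Not orphaning" means the RC is deleted with a cascading propagation policy, so the garbage collector removes the pods whose ownerReferences point at it; that is what "wait for all pods to be garbage collected" verifies. Against the REST API the policy travels in a DeleteOptions body on the DELETE request (cascading is also kubectl's default behavior); a sketch:

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background     # or Foreground; either way dependents are deleted, not orphaned
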
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:37:48.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 13:37:48.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8980'
Dec 18 13:37:51.458: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 13:37:51.458: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 18 13:37:53.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8980'
Dec 18 13:37:53.754: INFO: stderr: ""
Dec 18 13:37:53.755: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:37:53.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8980" for this suite.
Dec 18 13:37:59.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:37:59.962: INFO: namespace kubectl-8980 deletion completed in 6.179057438s

• [SLOW TEST:11.677 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
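
Note the stderr line: generator-based kubectl run was already deprecated in this release and has since been removed. The Deployment the deprecated generator produced looks roughly like the following (the run label key is what the apps.v1 generator used; treat the details as an approximation):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine

On current clusters, `kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine` is the supported replacement (it labels with app rather than run).
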
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:37:59.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:38:00.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5" in namespace "projected-7539" to be "success or failure"
Dec 18 13:38:00.132: INFO: Pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.059089ms
Dec 18 13:38:02.140: INFO: Pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017564947s
Dec 18 13:38:04.149: INFO: Pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025939296s
Dec 18 13:38:06.157: INFO: Pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034598498s
Dec 18 13:38:08.167: INFO: Pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043945732s
STEP: Saw pod success
Dec 18 13:38:08.167: INFO: Pod "downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5" satisfied condition "success or failure"
Dec 18 13:38:08.170: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5 container client-container: 
STEP: delete the pod
Dec 18 13:38:08.269: INFO: Waiting for pod downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5 to disappear
Dec 18 13:38:08.277: INFO: Pod downwardapi-volume-a65d9aa6-d865-4f76-8982-1e3206c621c5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:38:08.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7539" for this suite.
Dec 18 13:38:14.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:38:14.420: INFO: namespace projected-7539 deletion completed in 6.139405372s

• [SLOW TEST:14.457 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
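
"Provide container's cpu limit" exposes a container's resource limit as a file through a projected downwardAPI volume; the kubelet writes the value, scaled by an optional divisor. A sketch under assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m           # 500m divided by 1m: the file contains "500"
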
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:38:14.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0e8f390a-2a17-4ad4-8328-ef30f3f253d8
STEP: Creating a pod to test consume secrets
Dec 18 13:38:14.552: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3" in namespace "projected-5848" to be "success or failure"
Dec 18 13:38:14.557: INFO: Pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.84076ms
Dec 18 13:38:16.572: INFO: Pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019881222s
Dec 18 13:38:18.580: INFO: Pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027938s
Dec 18 13:38:20.601: INFO: Pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048659198s
Dec 18 13:38:22.618: INFO: Pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06537526s
STEP: Saw pod success
Dec 18 13:38:22.618: INFO: Pod "pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3" satisfied condition "success or failure"
Dec 18 13:38:22.655: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3 container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 13:38:22.812: INFO: Waiting for pod pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3 to disappear
Dec 18 13:38:22.817: INFO: Pod pod-projected-secrets-298f6974-0bee-4702-9430-6c3668c602a3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:38:22.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5848" for this suite.
Dec 18 13:38:28.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:38:28.976: INFO: namespace projected-5848 deletion completed in 6.151435254s

• [SLOW TEST:14.555 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
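
The non-root variant additionally runs the pod under a non-root UID with an fsGroup, verifying that the projected files end up group-owned and group-readable so the unprivileged process can read them. Sketch with assumed IDs:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root
    fsGroup: 2000                   # projected files are group-owned by this GID
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "id && ls -ln /etc/projected"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440             # group-readable, so UID 1000 reads via fsGroup 2000
      sources:
      - secret:
          name: projected-secret-test
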
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:38:28.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 18 13:38:29.409: INFO: Number of nodes with available pods: 0
Dec 18 13:38:29.410: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:30.431: INFO: Number of nodes with available pods: 0
Dec 18 13:38:30.431: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:31.616: INFO: Number of nodes with available pods: 0
Dec 18 13:38:31.616: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:32.425: INFO: Number of nodes with available pods: 0
Dec 18 13:38:32.426: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:33.451: INFO: Number of nodes with available pods: 0
Dec 18 13:38:33.451: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:34.421: INFO: Number of nodes with available pods: 0
Dec 18 13:38:34.421: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:36.388: INFO: Number of nodes with available pods: 0
Dec 18 13:38:36.389: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:37.271: INFO: Number of nodes with available pods: 0
Dec 18 13:38:37.271: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:37.684: INFO: Number of nodes with available pods: 0
Dec 18 13:38:37.685: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:39.117: INFO: Number of nodes with available pods: 0
Dec 18 13:38:39.117: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:39.428: INFO: Number of nodes with available pods: 0
Dec 18 13:38:39.428: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:38:40.427: INFO: Number of nodes with available pods: 2
Dec 18 13:38:40.427: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 18 13:38:40.533: INFO: Number of nodes with available pods: 2
Dec 18 13:38:40.534: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4609, will wait for the garbage collector to delete the pods
Dec 18 13:38:42.151: INFO: Deleting DaemonSet.extensions daemon-set took: 8.308508ms
Dec 18 13:38:42.853: INFO: Terminating DaemonSet.extensions daemon-set pods took: 701.563121ms
Dec 18 13:38:51.862: INFO: Number of nodes with available pods: 0
Dec 18 13:38:51.862: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 13:38:51.870: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4609/daemonsets","resourceVersion":"17140895"},"items":null}

Dec 18 13:38:51.876: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4609/pods","resourceVersion":"17140895"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:38:51.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4609" for this suite.
Dec 18 13:38:57.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:38:58.045: INFO: namespace daemonsets-4609 deletion completed in 6.145325694s

• [SLOW TEST:29.067 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
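
The "revived" step works because the DaemonSet controller reconciles on pod status: the test patches one daemon pod's status.phase to Failed, and the controller deletes the failed pod and creates a fresh one on the same node, returning the log to its steady state of "Number of running nodes: 2, number of available pods: 2". A minimal DaemonSet of the shape the test creates (names assumed, image borrowed from this run):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set             # must match spec.selector
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
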
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:38:58.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 18 13:38:58.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4805'
Dec 18 13:38:58.602: INFO: stderr: ""
Dec 18 13:38:58.603: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 18 13:38:59.619: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:38:59.620: INFO: Found 0 / 1
Dec 18 13:39:00.616: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:00.616: INFO: Found 0 / 1
Dec 18 13:39:01.626: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:01.626: INFO: Found 0 / 1
Dec 18 13:39:02.632: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:02.632: INFO: Found 0 / 1
Dec 18 13:39:03.629: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:03.630: INFO: Found 0 / 1
Dec 18 13:39:04.622: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:04.622: INFO: Found 0 / 1
Dec 18 13:39:05.617: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:05.617: INFO: Found 1 / 1
Dec 18 13:39:05.617: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 18 13:39:05.622: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:05.622: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Dec 18 13:39:05.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-88ppx --namespace=kubectl-4805 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 18 13:39:05.791: INFO: stderr: ""
Dec 18 13:39:05.791: INFO: stdout: "pod/redis-master-88ppx patched\n"
STEP: checking annotations
Dec 18 13:39:05.858: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 13:39:05.859: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:39:05.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4805" for this suite.
Dec 18 13:39:27.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:39:28.011: INFO: namespace kubectl-4805 deletion completed in 22.145997416s

• [SLOW TEST:29.966 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
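
The patch in the log is a strategic merge patch: the fragment is merged into the live object, so one annotation is added without touching anything else. The YAML equivalent of the JSON payload passed to -p:

metadata:
  annotations:
    x: "y"

The test then re-reads every pod matched by the app=redis selector and asserts the annotation is present ("STEP: checking annotations").
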
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:39:28.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:39:28.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4" in namespace "projected-209" to be "success or failure"
Dec 18 13:39:28.134: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.05445ms
Dec 18 13:39:30.141: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018498508s
Dec 18 13:39:32.162: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039746254s
Dec 18 13:39:34.211: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088759634s
Dec 18 13:39:36.224: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4": Phase="Running", Reason="", readiness=true. Elapsed: 8.101312046s
Dec 18 13:39:38.291: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168129687s
STEP: Saw pod success
Dec 18 13:39:38.291: INFO: Pod "downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4" satisfied condition "success or failure"
Dec 18 13:39:38.303: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4 container client-container: 
STEP: delete the pod
Dec 18 13:39:38.419: INFO: Waiting for pod downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4 to disappear
Dec 18 13:39:38.430: INFO: Pod downwardapi-volume-b70af11f-032b-487e-a8a8-dae8a3b996d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:39:38.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-209" for this suite.
Dec 18 13:39:44.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:39:44.618: INFO: namespace projected-209 deletion completed in 6.183045811s

• [SLOW TEST:16.607 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
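
"Podname only" is the simplest downwardAPI projection: a single item whose fieldRef points at metadata.name. A sketch of the volume portion (the surrounding pod looks like the cpu-limit example earlier):

volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name   # the mounted file "podname" contains the pod's own name
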
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:39:44.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:40:36.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9009" for this suite.
Dec 18 13:40:42.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:40:42.251: INFO: namespace container-runtime-9009 deletion completed in 6.145465749s

• [SLOW TEST:57.633 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
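
The three container names plausibly encode restart policies (rpa/rpof/rpn for Always/OnFailure/Never, though the log itself doesn't spell that out); for each, the test runs a container with a scripted exit and asserts on RestartCount, pod Phase, the Ready condition, and the container State. A sketch of one such probe, assuming busybox and OnFailure:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]   # non-zero exit + OnFailure: kubelet restarts it and RestartCount grows

With restartPolicy Never the same container would leave the pod in phase Failed with RestartCount 0, and with exit 0 under OnFailure it would reach Succeeded.
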
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:40:42.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-2mbg
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 13:40:42.474: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2mbg" in namespace "subpath-2279" to be "success or failure"
Dec 18 13:40:42.485: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Pending", Reason="", readiness=false. Elapsed: 9.992578ms
Dec 18 13:40:44.495: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019977609s
Dec 18 13:40:46.521: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04590955s
Dec 18 13:40:48.536: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061124594s
Dec 18 13:40:50.566: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 8.091421465s
Dec 18 13:40:52.635: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 10.160791853s
Dec 18 13:40:54.646: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 12.171246345s
Dec 18 13:40:56.655: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 14.180699317s
Dec 18 13:40:58.677: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 16.202746706s
Dec 18 13:41:00.689: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 18.214144822s
Dec 18 13:41:02.696: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 20.221659304s
Dec 18 13:41:04.704: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 22.229295819s
Dec 18 13:41:06.719: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 24.243975543s
Dec 18 13:41:08.728: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 26.253853419s
Dec 18 13:41:10.739: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Running", Reason="", readiness=true. Elapsed: 28.264423793s
Dec 18 13:41:12.749: INFO: Pod "pod-subpath-test-downwardapi-2mbg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.273925282s
STEP: Saw pod success
Dec 18 13:41:12.749: INFO: Pod "pod-subpath-test-downwardapi-2mbg" satisfied condition "success or failure"
Dec 18 13:41:12.753: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-2mbg container test-container-subpath-downwardapi-2mbg: 
STEP: delete the pod
Dec 18 13:41:12.881: INFO: Waiting for pod pod-subpath-test-downwardapi-2mbg to disappear
Dec 18 13:41:12.896: INFO: Pod pod-subpath-test-downwardapi-2mbg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2mbg
Dec 18 13:41:12.896: INFO: Deleting pod "pod-subpath-test-downwardapi-2mbg" in namespace "subpath-2279"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:41:12.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2279" for this suite.
Dec 18 13:41:18.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:41:19.048: INFO: namespace subpath-2279 deletion completed in 6.139315647s

• [SLOW TEST:36.796 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
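
Subpath testing matters for the "atomic writer" volumes (configMap, secret, downwardAPI, projected) because the kubelet writes them through timestamped directories behind a symlink, and a subPath mount must keep resolving to valid content. The long Running stretch in the log is the test container repeatedly re-reading the file before exiting. Sketch under assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "for i in 1 2 3 4 5; do cat /test/podname; sleep 2; done"]
    volumeMounts:
    - name: downward-vol
      mountPath: /test/podname      # mounts a single file out of the volume
      subPath: podname
  volumes:
  - name: downward-vol
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
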
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:41:19.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:41:19.090: INFO: Creating ReplicaSet my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d
Dec 18 13:41:19.188: INFO: Pod name my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d: Found 0 pods out of 1
Dec 18 13:41:24.197: INFO: Pod name my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d: Found 1 pod out of 1
Dec 18 13:41:24.197: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d" is running
Dec 18 13:41:28.207: INFO: Pod "my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d-scch7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:41:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:41:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:41:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-18 13:41:19 +0000 UTC Reason: Message:}])
Dec 18 13:41:28.207: INFO: Trying to dial the pod
Dec 18 13:41:33.243: INFO: Controller my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d: Got expected result from replica 1 [my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d-scch7]: "my-hostname-basic-d8bdc727-06f7-4f3e-aab2-8327175fea0d-scch7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:41:33.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4450" for this suite.
Dec 18 13:41:39.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:41:39.555: INFO: namespace replicaset-4450 deletion completed in 6.306821706s

• [SLOW TEST:20.506 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:41:39.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1218 13:42:21.194441       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 13:42:21.194: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:42:21.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7456" for this suite.
Dec 18 13:42:33.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:42:33.318: INFO: namespace gc-7456 deletion completed in 12.116058159s

• [SLOW TEST:53.760 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
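
This is the counterpart of the earlier "not orphaning" spec: the RC is deleted with the Orphan propagation policy, the garbage collector strips the ownerReferences instead of deleting the pods, and the 30-second wait confirms nothing is reaped by mistake. The DeleteOptions body for the orphaning case:

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan         # dependents lose their ownerReference and survive the delete

On this kubectl vintage the same effect was available as `kubectl delete rc <name> --cascade=false`.
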
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:42:33.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 13:42:35.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2126'
Dec 18 13:42:37.081: INFO: stderr: ""
Dec 18 13:42:37.081: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 18 13:42:37.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2126'
Dec 18 13:42:45.812: INFO: stderr: ""
Dec 18 13:42:45.812: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:42:45.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2126" for this suite.
Dec 18 13:42:51.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:42:52.036: INFO: namespace kubectl-2126 deletion completed in 6.217299731s

• [SLOW TEST:18.719 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
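
With --restart=Never and the run-pod/v1 generator, kubectl run creates a bare Pod rather than a Deployment or Job, which is what the spec above verifies. The same command from the log, runnable by hand:

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never
kubectl delete pod e2e-test-nginx-pod
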
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:42:52.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:42:52.119: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef" in namespace "downward-api-9834" to be "success or failure"
Dec 18 13:42:52.123: INFO: Pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.733773ms
Dec 18 13:42:54.135: INFO: Pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015071041s
Dec 18 13:42:56.179: INFO: Pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059234551s
Dec 18 13:42:58.207: INFO: Pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087528016s
Dec 18 13:43:00.218: INFO: Pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098435092s
STEP: Saw pod success
Dec 18 13:43:00.218: INFO: Pod "downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef" satisfied condition "success or failure"
Dec 18 13:43:00.225: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef container client-container: 
STEP: delete the pod
Dec 18 13:43:00.284: INFO: Waiting for pod downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef to disappear
Dec 18 13:43:00.344: INFO: Pod downwardapi-volume-b089e703-084d-4f38-9c5d-199092308bef no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:43:00.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9834" for this suite.
Dec 18 13:43:06.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:43:06.691: INFO: namespace downward-api-9834 deletion completed in 6.326100178s

• [SLOW TEST:14.653 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
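
The spec above mounts a downwardAPI volume and checks that its files inherit the volume's DefaultMode. A minimal sketch of such a pod, assuming defaultMode 0400 and hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # shows -r-------- for podname
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400          # applied to every file in the volume
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
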
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:43:06.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 13:43:06.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4659'
Dec 18 13:43:06.971: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 13:43:06.971: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Dec 18 13:43:06.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4659'
Dec 18 13:43:07.303: INFO: stderr: ""
Dec 18 13:43:07.303: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:43:07.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4659" for this suite.
Dec 18 13:43:13.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:43:13.466: INFO: namespace kubectl-4659 deletion completed in 6.156274887s

• [SLOW TEST:6.774 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
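
The spec above relies on the job/v1 generator, and the stderr captured in the log already names the replacement. Both spellings, runnable against a v1.15-era cluster:

# the deprecated form exercised by the test:
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl delete job e2e-test-nginx-job
# the replacement suggested by the deprecation warning:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
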
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:43:13.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 18 13:43:22.207: INFO: Successfully updated pod "labelsupdate5404f764-4e9b-4c67-a99b-9ec5b6b3771c"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:43:24.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3416" for this suite.
Dec 18 13:43:46.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:43:46.506: INFO: namespace downward-api-3416 deletion completed in 22.197298247s

• [SLOW TEST:33.040 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
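
The spec above patches a running pod's labels and waits for the kubelet to rewrite the downwardAPI file that projects them. A hand-run sketch, assuming a pod that mounts metadata.labels at /etc/podinfo/labels (names hypothetical):

kubectl label pod labelsupdate-demo mylabel=v2 --overwrite
# the projected file converges after the kubelet's next sync:
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels
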
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:43:46.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:43:54.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5940" for this suite.
Dec 18 13:44:56.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:44:57.013: INFO: namespace kubelet-test-5940 deletion completed in 1m2.28380751s

• [SLOW TEST:70.506 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
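
The spec above schedules a busybox command and asserts its stdout reaches the container log. An equivalent check by hand (pod name hypothetical):

kubectl run busybox-logger --restart=Never --generator=run-pod/v1 \
  --image=busybox -- sh -c 'echo running in the pod'
kubectl logs busybox-logger   # prints: running in the pod
kubectl delete pod busybox-logger
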
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:44:57.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-ce41cc19-7485-44bb-acae-4df7574f53e6
STEP: Creating configMap with name cm-test-opt-upd-a256e051-7fe8-4318-a0a9-0666c04fdaba
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ce41cc19-7485-44bb-acae-4df7574f53e6
STEP: Updating configmap cm-test-opt-upd-a256e051-7fe8-4318-a0a9-0666c04fdaba
STEP: Creating configMap with name cm-test-opt-create-6927e888-cf63-4402-844e-b8649f17650b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:45:11.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1433" for this suite.
Dec 18 13:45:33.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:45:33.879: INFO: namespace projected-1433 deletion completed in 22.174591387s

• [SLOW TEST:36.865 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
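
The spec above wires optional ConfigMaps into a projected volume, then deletes, updates, and creates them while watching the volume converge. A minimal sketch of the optional wiring (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-may-not-exist
          optional: true     # the pod starts even while this ConfigMap is absent
EOF
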
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:45:33.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:45:34.100: INFO: Create a RollingUpdate DaemonSet
Dec 18 13:45:34.112: INFO: Check that daemon pods launch on every node of the cluster
Dec 18 13:45:34.129: INFO: Number of nodes with available pods: 0
Dec 18 13:45:34.129: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:35.143: INFO: Number of nodes with available pods: 0
Dec 18 13:45:35.143: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:36.772: INFO: Number of nodes with available pods: 0
Dec 18 13:45:36.773: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:37.301: INFO: Number of nodes with available pods: 0
Dec 18 13:45:37.301: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:38.519: INFO: Number of nodes with available pods: 0
Dec 18 13:45:38.519: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:39.145: INFO: Number of nodes with available pods: 0
Dec 18 13:45:39.145: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:40.141: INFO: Number of nodes with available pods: 0
Dec 18 13:45:40.141: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:42.589: INFO: Number of nodes with available pods: 0
Dec 18 13:45:42.589: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:44.049: INFO: Number of nodes with available pods: 0
Dec 18 13:45:44.049: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:44.143: INFO: Number of nodes with available pods: 0
Dec 18 13:45:44.143: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:45.148: INFO: Number of nodes with available pods: 0
Dec 18 13:45:45.149: INFO: Node iruya-node is running more than one daemon pod
Dec 18 13:45:46.144: INFO: Number of nodes with available pods: 2
Dec 18 13:45:46.145: INFO: Number of running nodes: 2, number of available pods: 2
Dec 18 13:45:46.145: INFO: Update the DaemonSet to trigger a rollout
Dec 18 13:45:46.156: INFO: Updating DaemonSet daemon-set
Dec 18 13:45:57.200: INFO: Roll back the DaemonSet before rollout is complete
Dec 18 13:45:57.293: INFO: Updating DaemonSet daemon-set
Dec 18 13:45:57.293: INFO: Make sure DaemonSet rollback is complete
Dec 18 13:45:57.313: INFO: Wrong image for pod: daemon-set-dbh6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 18 13:45:57.313: INFO: Pod daemon-set-dbh6g is not available
Dec 18 13:45:58.350: INFO: Wrong image for pod: daemon-set-dbh6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 18 13:45:58.350: INFO: Pod daemon-set-dbh6g is not available
Dec 18 13:45:59.347: INFO: Wrong image for pod: daemon-set-dbh6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 18 13:45:59.347: INFO: Pod daemon-set-dbh6g is not available
Dec 18 13:46:00.362: INFO: Wrong image for pod: daemon-set-dbh6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 18 13:46:00.363: INFO: Pod daemon-set-dbh6g is not available
Dec 18 13:46:01.382: INFO: Wrong image for pod: daemon-set-dbh6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 18 13:46:01.382: INFO: Pod daemon-set-dbh6g is not available
Dec 18 13:46:02.345: INFO: Wrong image for pod: daemon-set-dbh6g. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 18 13:46:02.345: INFO: Pod daemon-set-dbh6g is not available
Dec 18 13:46:03.346: INFO: Pod daemon-set-mbcbc is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8062, will wait for the garbage collector to delete the pods
Dec 18 13:46:03.448: INFO: Deleting DaemonSet.extensions daemon-set took: 21.352906ms
Dec 18 13:46:03.749: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.943065ms
Dec 18 13:46:10.165: INFO: Number of nodes with available pods: 0
Dec 18 13:46:10.165: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 13:46:10.168: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8062/daemonsets","resourceVersion":"17142092"},"items":null}

Dec 18 13:46:10.171: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8062/pods","resourceVersion":"17142092"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:46:10.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8062" for this suite.
Dec 18 13:46:16.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:46:16.331: INFO: namespace daemonsets-8062 deletion completed in 6.146138545s

• [SLOW TEST:42.452 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
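
The spec above pushes a bad image (foo:non-existent in the log) to a DaemonSet mid-rollout and then rolls it back, expecting only the broken pod to be replaced. The same dance by hand, assuming a DaemonSet named daemon-set with a container named app:

kubectl set image ds/daemon-set app=foo:non-existent   # triggers a rollout that can never finish
kubectl rollout undo ds/daemon-set                     # roll back before it completes
kubectl rollout status ds/daemon-set                   # healthy pods are left untouched
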
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:46:16.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 18 13:46:25.246: INFO: Successfully updated pod "annotationupdatede01e8f1-d140-4eec-9fac-8ab2a248e5af"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:46:27.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2065" for this suite.
Dec 18 13:46:51.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:46:51.527: INFO: namespace downward-api-2065 deletion completed in 24.213107816s

• [SLOW TEST:35.195 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
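
This spec mirrors the labels test earlier, but projects metadata.annotations instead. A hand-run sketch (names hypothetical):

kubectl annotate pod annotationupdate-demo build=v2 --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
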
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:46:51.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-8cwb
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 13:46:51.760: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8cwb" in namespace "subpath-2056" to be "success or failure"
Dec 18 13:46:51.768: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291791ms
Dec 18 13:46:53.784: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023437596s
Dec 18 13:46:55.798: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037743047s
Dec 18 13:46:57.811: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051380077s
Dec 18 13:46:59.829: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 8.069004735s
Dec 18 13:47:01.840: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 10.08030594s
Dec 18 13:47:03.857: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 12.096727339s
Dec 18 13:47:05.868: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 14.10835047s
Dec 18 13:47:07.884: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 16.123411964s
Dec 18 13:47:10.389: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 18.628881017s
Dec 18 13:47:12.403: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 20.642916869s
Dec 18 13:47:14.412: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 22.652306103s
Dec 18 13:47:16.421: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 24.661202338s
Dec 18 13:47:18.438: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Running", Reason="", readiness=true. Elapsed: 26.677900219s
Dec 18 13:47:20.457: INFO: Pod "pod-subpath-test-secret-8cwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.696676456s
STEP: Saw pod success
Dec 18 13:47:20.458: INFO: Pod "pod-subpath-test-secret-8cwb" satisfied condition "success or failure"
Dec 18 13:47:20.467: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-8cwb container test-container-subpath-secret-8cwb: 
STEP: delete the pod
Dec 18 13:47:20.531: INFO: Waiting for pod pod-subpath-test-secret-8cwb to disappear
Dec 18 13:47:20.546: INFO: Pod pod-subpath-test-secret-8cwb no longer exists
STEP: Deleting pod pod-subpath-test-secret-8cwb
Dec 18 13:47:20.546: INFO: Deleting pod "pod-subpath-test-secret-8cwb" in namespace "subpath-2056"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:47:20.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2056" for this suite.
Dec 18 13:47:26.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:47:26.691: INFO: namespace subpath-2056 deletion completed in 6.136014593s

• [SLOW TEST:35.162 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
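
The spec above mounts a single key of a secret via subPath and reads it back. A minimal sketch (names hypothetical); note that subPath mounts sit outside the atomic-writer symlink rotation, so they do not pick up later updates to the secret:

kubectl create secret generic subpath-demo --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["cat", "/mnt/key"]     # prints: value
    volumeMounts:
    - name: creds
      mountPath: /mnt/key
      subPath: key                   # mount one file out of the volume
  volumes:
  - name: creds
    secret:
      secretName: subpath-demo
EOF
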
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:47:26.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:47:26.812: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 18 13:47:31.834: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 18 13:47:33.859: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 18 13:47:33.911: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2022,SelfLink:/apis/apps/v1/namespaces/deployment-2022/deployments/test-cleanup-deployment,UID:162d0370-702d-411b-8763-5e81f2450b97,ResourceVersion:17142299,Generation:1,CreationTimestamp:2019-12-18 13:47:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Dec 18 13:47:33.919: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Dec 18 13:47:33.920: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Dec 18 13:47:33.921: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2022,SelfLink:/apis/apps/v1/namespaces/deployment-2022/replicasets/test-cleanup-controller,UID:54e096d0-a6d9-4d6b-80a3-3252e2f946d5,ResourceVersion:17142300,Generation:1,CreationTimestamp:2019-12-18 13:47:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 162d0370-702d-411b-8763-5e81f2450b97 0xc002609847 0xc002609848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 18 13:47:33.997: INFO: Pod "test-cleanup-controller-f9qxz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-f9qxz,GenerateName:test-cleanup-controller-,Namespace:deployment-2022,SelfLink:/api/v1/namespaces/deployment-2022/pods/test-cleanup-controller-f9qxz,UID:2ae8386f-f392-44f4-a6d0-2709e09a4693,ResourceVersion:17142297,Generation:0,CreationTimestamp:2019-12-18 13:47:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 54e096d0-a6d9-4d6b-80a3-3252e2f946d5 0xc002609db7 0xc002609db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8ddhx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8ddhx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8ddhx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002609e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002609e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:47:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:47:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:47:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 13:47:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-18 13:47:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-18 13:47:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dba2195049b6579d686fd9554c9ce06d93cfaa5136f95316a076040cb54fb9a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:47:33.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2022" for this suite.
Dec 18 13:47:42.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:47:42.509: INFO: namespace deployment-2022 deletion completed in 8.444581255s

• [SLOW TEST:15.815 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
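
The Deployment dumped above carries RevisionHistoryLimit:*0, which is what makes the controller delete fully rolled-over ReplicaSets instead of keeping them for rollback. A hand-run sketch of the same setup (deployment name hypothetical; kubectl create deployment names the container after the image, here nginx, and labels pods app=cleanup-demo):

kubectl create deployment cleanup-demo --image=docker.io/library/nginx:1.14-alpine
kubectl patch deployment cleanup-demo --type=merge -p '{"spec":{"revisionHistoryLimit":0}}'
kubectl set image deployment/cleanup-demo nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl get rs -l app=cleanup-demo   # once the rollout finishes, only the new ReplicaSet remains
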
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:47:42.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3712d179-2bbc-4462-bf68-bf637d723a25
STEP: Creating a pod to test consume configMaps
Dec 18 13:47:42.754: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a" in namespace "projected-1190" to be "success or failure"
Dec 18 13:47:42.765: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.836318ms
Dec 18 13:47:44.780: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025056343s
Dec 18 13:47:46.802: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047291323s
Dec 18 13:47:48.834: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079257319s
Dec 18 13:47:50.849: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094295426s
Dec 18 13:47:52.876: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.121494049s
STEP: Saw pod success
Dec 18 13:47:52.876: INFO: Pod "pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a" satisfied condition "success or failure"
Dec 18 13:47:52.886: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 13:47:53.036: INFO: Waiting for pod pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a to disappear
Dec 18 13:47:53.050: INFO: Pod pod-projected-configmaps-953cfa79-1a8f-4566-be3c-bafa2c09b32a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:47:53.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1190" for this suite.
Dec 18 13:47:59.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:47:59.274: INFO: namespace projected-1190 deletion completed in 6.218465323s

• [SLOW TEST:16.765 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
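
The spec above checks plain consumption: a projected ConfigMap key shows up as a file in the volume. A minimal sketch with the key remapped to a custom path (names hypothetical):

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/path/to/data-1"]   # prints: value-1
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1   # remap the key inside the volume
EOF
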
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:47:59.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:47:59.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d" in namespace "downward-api-7634" to be "success or failure"
Dec 18 13:47:59.615: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 128.008581ms
Dec 18 13:48:01.622: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135315489s
Dec 18 13:48:03.643: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156021491s
Dec 18 13:48:05.656: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16851102s
Dec 18 13:48:07.719: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23163571s
Dec 18 13:48:09.731: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.243504678s
STEP: Saw pod success
Dec 18 13:48:09.731: INFO: Pod "downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d" satisfied condition "success or failure"
Dec 18 13:48:09.737: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d container client-container: 
STEP: delete the pod
Dec 18 13:48:09.887: INFO: Waiting for pod downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d to disappear
Dec 18 13:48:09.933: INFO: Pod downwardapi-volume-82fdf96b-8714-4ca0-8794-d66124414d0d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:48:09.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7634" for this suite.
Dec 18 13:48:15.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:48:16.147: INFO: namespace downward-api-7634 deletion completed in 6.191930343s

• [SLOW TEST:16.872 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
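
The spec above projects the container's own CPU limit through a downwardAPI volume. A minimal sketch; with divisor 1m and a 500m limit the projected file reads 500 (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpulimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m        # report the limit in millicores
EOF
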
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:48:16.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:48:16.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1038" for this suite.
Dec 18 13:48:22.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:48:22.518: INFO: namespace services-1038 deletion completed in 6.28114823s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.371 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
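
The spec above asserts that the apiserver's own Service exists in the default namespace and serves HTTPS. The equivalent check by hand:

kubectl get service kubernetes -n default
kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].port}'   # prints: 443
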
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:48:22.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 18 13:48:31.886: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:48:32.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6199" for this suite.
Dec 18 13:48:38.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:48:38.361: INFO: namespace container-runtime-6199 deletion completed in 6.315401262s

• [SLOW TEST:15.843 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
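
The spec above writes DONE to a custom terminationMessagePath as a non-root user and expects it back in the container status, matching the Expected: &{DONE} line in the log. A minimal sketch (path, UID, and names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root, as the spec title requires
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path
EOF
# once the pod reaches Succeeded:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints: DONE
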
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:48:38.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4133
I1218 13:48:38.411163       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4133, replica count: 1
I1218 13:48:39.462467       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:40.463420       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:41.464156       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:42.464702       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:43.465226       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:44.465887       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:45.466438       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1218 13:48:46.467247       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 18 13:48:46.637: INFO: Created: latency-svc-vbqkn
Dec 18 13:48:46.656: INFO: Got endpoints: latency-svc-vbqkn [87.906826ms]
Dec 18 13:48:46.807: INFO: Created: latency-svc-gv5cb
Dec 18 13:48:46.847: INFO: Got endpoints: latency-svc-gv5cb [189.694561ms]
Dec 18 13:48:46.863: INFO: Created: latency-svc-4p55p
Dec 18 13:48:46.874: INFO: Got endpoints: latency-svc-4p55p [216.943471ms]
Dec 18 13:48:46.975: INFO: Created: latency-svc-l5ls9
Dec 18 13:48:46.989: INFO: Got endpoints: latency-svc-l5ls9 [332.873612ms]
Dec 18 13:48:47.034: INFO: Created: latency-svc-zwg5s
Dec 18 13:48:47.054: INFO: Got endpoints: latency-svc-zwg5s [396.417691ms]
Dec 18 13:48:47.118: INFO: Created: latency-svc-4w7ls
Dec 18 13:48:47.184: INFO: Created: latency-svc-29b5q
Dec 18 13:48:47.196: INFO: Got endpoints: latency-svc-4w7ls [538.920744ms]
Dec 18 13:48:47.339: INFO: Got endpoints: latency-svc-29b5q [681.727647ms]
Dec 18 13:48:47.350: INFO: Created: latency-svc-5p945
Dec 18 13:48:47.362: INFO: Got endpoints: latency-svc-5p945 [704.337027ms]
Dec 18 13:48:47.406: INFO: Created: latency-svc-ml8c8
Dec 18 13:48:47.415: INFO: Got endpoints: latency-svc-ml8c8 [758.028021ms]
Dec 18 13:48:47.557: INFO: Created: latency-svc-lklrs
Dec 18 13:48:47.573: INFO: Got endpoints: latency-svc-lklrs [914.822119ms]
Dec 18 13:48:47.777: INFO: Created: latency-svc-4xswz
Dec 18 13:48:47.843: INFO: Got endpoints: latency-svc-4xswz [1.185695682s]
Dec 18 13:48:47.845: INFO: Created: latency-svc-5hshk
Dec 18 13:48:47.868: INFO: Got endpoints: latency-svc-5hshk [1.209748207s]
Dec 18 13:48:47.968: INFO: Created: latency-svc-kbtzl
Dec 18 13:48:47.995: INFO: Got endpoints: latency-svc-kbtzl [1.337001251s]
Dec 18 13:48:48.045: INFO: Created: latency-svc-8kzzp
Dec 18 13:48:48.144: INFO: Got endpoints: latency-svc-8kzzp [1.485400205s]
Dec 18 13:48:48.158: INFO: Created: latency-svc-5jcdd
Dec 18 13:48:48.181: INFO: Got endpoints: latency-svc-5jcdd [1.522844011s]
Dec 18 13:48:48.223: INFO: Created: latency-svc-hghgd
Dec 18 13:48:48.230: INFO: Got endpoints: latency-svc-hghgd [1.571289328s]
Dec 18 13:48:48.324: INFO: Created: latency-svc-2xcjb
Dec 18 13:48:48.335: INFO: Got endpoints: latency-svc-2xcjb [1.487297947s]
Dec 18 13:48:48.388: INFO: Created: latency-svc-c644n
Dec 18 13:48:48.394: INFO: Got endpoints: latency-svc-c644n [1.518816786s]
Dec 18 13:48:48.569: INFO: Created: latency-svc-cjtd9
Dec 18 13:48:48.581: INFO: Got endpoints: latency-svc-cjtd9 [1.591726782s]
Dec 18 13:48:48.642: INFO: Created: latency-svc-kgt6z
Dec 18 13:48:48.665: INFO: Got endpoints: latency-svc-kgt6z [1.610625387s]
Dec 18 13:48:49.392: INFO: Created: latency-svc-d4wbs
Dec 18 13:48:49.435: INFO: Got endpoints: latency-svc-d4wbs [2.23853122s]
Dec 18 13:48:49.638: INFO: Created: latency-svc-65dvv
Dec 18 13:48:49.671: INFO: Got endpoints: latency-svc-65dvv [2.331660601s]
Dec 18 13:48:49.871: INFO: Created: latency-svc-nnqbj
Dec 18 13:48:49.896: INFO: Got endpoints: latency-svc-nnqbj [2.533051224s]
Dec 18 13:48:49.960: INFO: Created: latency-svc-gh87p
Dec 18 13:48:50.053: INFO: Got endpoints: latency-svc-gh87p [2.636955585s]
Dec 18 13:48:50.111: INFO: Created: latency-svc-8l6hm
Dec 18 13:48:50.122: INFO: Got endpoints: latency-svc-8l6hm [2.547968618s]
Dec 18 13:48:50.253: INFO: Created: latency-svc-6794m
Dec 18 13:48:50.254: INFO: Got endpoints: latency-svc-6794m [2.411397163s]
Dec 18 13:48:50.296: INFO: Created: latency-svc-flxcv
Dec 18 13:48:50.314: INFO: Got endpoints: latency-svc-flxcv [2.444857377s]
Dec 18 13:48:50.440: INFO: Created: latency-svc-w2wns
Dec 18 13:48:50.454: INFO: Got endpoints: latency-svc-w2wns [2.458002194s]
Dec 18 13:48:50.648: INFO: Created: latency-svc-zl6xr
Dec 18 13:48:50.695: INFO: Got endpoints: latency-svc-zl6xr [2.551210196s]
Dec 18 13:48:50.714: INFO: Created: latency-svc-khspz
Dec 18 13:48:50.726: INFO: Got endpoints: latency-svc-khspz [2.544212387s]
Dec 18 13:48:50.889: INFO: Created: latency-svc-mbxk5
Dec 18 13:48:50.951: INFO: Got endpoints: latency-svc-mbxk5 [2.721148791s]
Dec 18 13:48:50.953: INFO: Created: latency-svc-fvpxx
Dec 18 13:48:51.053: INFO: Got endpoints: latency-svc-fvpxx [2.717850425s]
Dec 18 13:48:51.066: INFO: Created: latency-svc-7gtg7
Dec 18 13:48:51.085: INFO: Got endpoints: latency-svc-7gtg7 [2.691576382s]
Dec 18 13:48:51.138: INFO: Created: latency-svc-hpr4w
Dec 18 13:48:51.263: INFO: Got endpoints: latency-svc-hpr4w [2.681032132s]
Dec 18 13:48:51.282: INFO: Created: latency-svc-r5rzh
Dec 18 13:48:51.301: INFO: Got endpoints: latency-svc-r5rzh [2.635926051s]
Dec 18 13:48:51.364: INFO: Created: latency-svc-4fh42
Dec 18 13:48:51.367: INFO: Got endpoints: latency-svc-4fh42 [1.931349295s]
Dec 18 13:48:51.467: INFO: Created: latency-svc-9njvw
Dec 18 13:48:51.480: INFO: Got endpoints: latency-svc-9njvw [1.80734231s]
Dec 18 13:48:51.542: INFO: Created: latency-svc-zws24
Dec 18 13:48:51.604: INFO: Got endpoints: latency-svc-zws24 [1.707565204s]
Dec 18 13:48:51.636: INFO: Created: latency-svc-k726b
Dec 18 13:48:51.666: INFO: Got endpoints: latency-svc-k726b [1.61253583s]
Dec 18 13:48:51.875: INFO: Created: latency-svc-t8rh2
Dec 18 13:48:51.937: INFO: Got endpoints: latency-svc-t8rh2 [1.815541629s]
Dec 18 13:48:51.946: INFO: Created: latency-svc-5dc5h
Dec 18 13:48:51.953: INFO: Got endpoints: latency-svc-5dc5h [1.698651854s]
Dec 18 13:48:52.052: INFO: Created: latency-svc-qggbh
Dec 18 13:48:52.070: INFO: Got endpoints: latency-svc-qggbh [1.755796323s]
Dec 18 13:48:52.111: INFO: Created: latency-svc-hhs4n
Dec 18 13:48:52.196: INFO: Got endpoints: latency-svc-hhs4n [1.741840149s]
Dec 18 13:48:52.261: INFO: Created: latency-svc-tbtpd
Dec 18 13:48:52.280: INFO: Got endpoints: latency-svc-tbtpd [1.584400242s]
Dec 18 13:48:52.412: INFO: Created: latency-svc-l9db8
Dec 18 13:48:52.413: INFO: Got endpoints: latency-svc-l9db8 [1.686297218s]
Dec 18 13:48:52.466: INFO: Created: latency-svc-gqwfs
Dec 18 13:48:52.470: INFO: Got endpoints: latency-svc-gqwfs [1.51867056s]
Dec 18 13:48:52.603: INFO: Created: latency-svc-k6dc5
Dec 18 13:48:52.626: INFO: Got endpoints: latency-svc-k6dc5 [1.572425672s]
Dec 18 13:48:52.773: INFO: Created: latency-svc-6p7s9
Dec 18 13:48:52.784: INFO: Got endpoints: latency-svc-6p7s9 [1.698395237s]
Dec 18 13:48:52.943: INFO: Created: latency-svc-42k4k
Dec 18 13:48:52.978: INFO: Got endpoints: latency-svc-42k4k [1.715029935s]
Dec 18 13:48:52.983: INFO: Created: latency-svc-x297s
Dec 18 13:48:52.988: INFO: Got endpoints: latency-svc-x297s [1.685951934s]
Dec 18 13:48:53.098: INFO: Created: latency-svc-tcsq6
Dec 18 13:48:53.105: INFO: Got endpoints: latency-svc-tcsq6 [1.737345848s]
Dec 18 13:48:53.258: INFO: Created: latency-svc-jch7q
Dec 18 13:48:53.264: INFO: Got endpoints: latency-svc-jch7q [1.784130593s]
Dec 18 13:48:53.292: INFO: Created: latency-svc-b7vg4
Dec 18 13:48:53.297: INFO: Got endpoints: latency-svc-b7vg4 [1.692320007s]
Dec 18 13:48:53.351: INFO: Created: latency-svc-97b9v
Dec 18 13:48:53.419: INFO: Got endpoints: latency-svc-97b9v [1.753321188s]
Dec 18 13:48:53.466: INFO: Created: latency-svc-zrbg8
Dec 18 13:48:53.475: INFO: Got endpoints: latency-svc-zrbg8 [1.537006931s]
Dec 18 13:48:53.560: INFO: Created: latency-svc-tn5wf
Dec 18 13:48:53.574: INFO: Got endpoints: latency-svc-tn5wf [1.620836828s]
Dec 18 13:48:53.626: INFO: Created: latency-svc-gmtjk
Dec 18 13:48:53.647: INFO: Got endpoints: latency-svc-gmtjk [1.576164524s]
Dec 18 13:48:53.793: INFO: Created: latency-svc-vt5fc
Dec 18 13:48:53.811: INFO: Got endpoints: latency-svc-vt5fc [1.614684216s]
Dec 18 13:48:53.834: INFO: Created: latency-svc-5429m
Dec 18 13:48:53.862: INFO: Got endpoints: latency-svc-5429m [1.581504989s]
Dec 18 13:48:53.970: INFO: Created: latency-svc-m4bd4
Dec 18 13:48:53.989: INFO: Got endpoints: latency-svc-m4bd4 [1.576371543s]
Dec 18 13:48:54.114: INFO: Created: latency-svc-dnzvm
Dec 18 13:48:54.164: INFO: Got endpoints: latency-svc-dnzvm [1.693386369s]
Dec 18 13:48:54.165: INFO: Created: latency-svc-44zsb
Dec 18 13:48:54.194: INFO: Got endpoints: latency-svc-44zsb [1.567605034s]
Dec 18 13:48:54.322: INFO: Created: latency-svc-rvnkt
Dec 18 13:48:54.322: INFO: Got endpoints: latency-svc-rvnkt [1.537216141s]
Dec 18 13:48:54.401: INFO: Created: latency-svc-fzp7k
Dec 18 13:48:54.488: INFO: Got endpoints: latency-svc-fzp7k [1.508910209s]
Dec 18 13:48:54.499: INFO: Created: latency-svc-m7mvj
Dec 18 13:48:54.508: INFO: Got endpoints: latency-svc-m7mvj [1.519968073s]
Dec 18 13:48:54.573: INFO: Created: latency-svc-wl9bx
Dec 18 13:48:54.582: INFO: Got endpoints: latency-svc-wl9bx [1.476614543s]
Dec 18 13:48:54.698: INFO: Created: latency-svc-cv7j7
Dec 18 13:48:54.721: INFO: Got endpoints: latency-svc-cv7j7 [1.456508119s]
Dec 18 13:48:54.903: INFO: Created: latency-svc-xb2h4
Dec 18 13:48:54.917: INFO: Got endpoints: latency-svc-xb2h4 [1.620449913s]
Dec 18 13:48:54.956: INFO: Created: latency-svc-kdzvx
Dec 18 13:48:54.966: INFO: Got endpoints: latency-svc-kdzvx [1.545850398s]
Dec 18 13:48:55.078: INFO: Created: latency-svc-8n9zz
Dec 18 13:48:55.080: INFO: Got endpoints: latency-svc-8n9zz [1.604452159s]
Dec 18 13:48:55.154: INFO: Created: latency-svc-9pss6
Dec 18 13:48:55.228: INFO: Got endpoints: latency-svc-9pss6 [1.653577084s]
Dec 18 13:48:55.262: INFO: Created: latency-svc-5gt5n
Dec 18 13:48:55.271: INFO: Got endpoints: latency-svc-5gt5n [1.624111455s]
Dec 18 13:48:55.331: INFO: Created: latency-svc-c6qhs
Dec 18 13:48:55.410: INFO: Got endpoints: latency-svc-c6qhs [1.598832861s]
Dec 18 13:48:55.431: INFO: Created: latency-svc-d89kb
Dec 18 13:48:55.439: INFO: Got endpoints: latency-svc-d89kb [1.57626225s]
Dec 18 13:48:55.483: INFO: Created: latency-svc-f8q4j
Dec 18 13:48:55.485: INFO: Got endpoints: latency-svc-f8q4j [1.49577482s]
Dec 18 13:48:55.585: INFO: Created: latency-svc-llw2t
Dec 18 13:48:55.587: INFO: Got endpoints: latency-svc-llw2t [1.423052947s]
Dec 18 13:48:55.638: INFO: Created: latency-svc-pxjr8
Dec 18 13:48:55.643: INFO: Got endpoints: latency-svc-pxjr8 [1.448059512s]
Dec 18 13:48:55.762: INFO: Created: latency-svc-g54sg
Dec 18 13:48:55.781: INFO: Got endpoints: latency-svc-g54sg [1.458369434s]
Dec 18 13:48:55.789: INFO: Created: latency-svc-zc2fb
Dec 18 13:48:55.794: INFO: Got endpoints: latency-svc-zc2fb [1.306359771s]
Dec 18 13:48:55.894: INFO: Created: latency-svc-xztgt
Dec 18 13:48:55.910: INFO: Got endpoints: latency-svc-xztgt [1.401103085s]
Dec 18 13:48:55.953: INFO: Created: latency-svc-bwwbl
Dec 18 13:48:55.963: INFO: Got endpoints: latency-svc-bwwbl [1.380660971s]
Dec 18 13:48:56.085: INFO: Created: latency-svc-dmvzb
Dec 18 13:48:56.104: INFO: Got endpoints: latency-svc-dmvzb [1.381873024s]
Dec 18 13:48:56.141: INFO: Created: latency-svc-6m5sb
Dec 18 13:48:56.147: INFO: Got endpoints: latency-svc-6m5sb [1.229440561s]
Dec 18 13:48:56.231: INFO: Created: latency-svc-46ncx
Dec 18 13:48:56.240: INFO: Got endpoints: latency-svc-46ncx [1.274108485s]
Dec 18 13:48:56.294: INFO: Created: latency-svc-7j89h
Dec 18 13:48:56.302: INFO: Got endpoints: latency-svc-7j89h [1.22246553s]
Dec 18 13:48:56.452: INFO: Created: latency-svc-mppnk
Dec 18 13:48:56.477: INFO: Got endpoints: latency-svc-mppnk [1.248675354s]
Dec 18 13:48:56.528: INFO: Created: latency-svc-927fv
Dec 18 13:48:56.613: INFO: Got endpoints: latency-svc-927fv [1.341784636s]
Dec 18 13:48:56.641: INFO: Created: latency-svc-7x2w4
Dec 18 13:48:56.647: INFO: Got endpoints: latency-svc-7x2w4 [1.236802746s]
Dec 18 13:48:56.701: INFO: Created: latency-svc-cq6hn
Dec 18 13:48:56.809: INFO: Got endpoints: latency-svc-cq6hn [1.369329758s]
Dec 18 13:48:56.828: INFO: Created: latency-svc-gxqrz
Dec 18 13:48:56.845: INFO: Got endpoints: latency-svc-gxqrz [1.359139839s]
Dec 18 13:48:56.896: INFO: Created: latency-svc-fbx6n
Dec 18 13:48:56.971: INFO: Got endpoints: latency-svc-fbx6n [1.383585227s]
Dec 18 13:48:56.995: INFO: Created: latency-svc-c567z
Dec 18 13:48:57.005: INFO: Got endpoints: latency-svc-c567z [1.362361645s]
Dec 18 13:48:57.069: INFO: Created: latency-svc-wxrjw
Dec 18 13:48:57.174: INFO: Got endpoints: latency-svc-wxrjw [1.393496328s]
Dec 18 13:48:57.183: INFO: Created: latency-svc-j6kb6
Dec 18 13:48:57.194: INFO: Got endpoints: latency-svc-j6kb6 [1.399365124s]
Dec 18 13:48:57.226: INFO: Created: latency-svc-6998k
Dec 18 13:48:57.249: INFO: Got endpoints: latency-svc-6998k [1.339498651s]
Dec 18 13:48:57.380: INFO: Created: latency-svc-n8hkb
Dec 18 13:48:57.399: INFO: Got endpoints: latency-svc-n8hkb [1.435748255s]
Dec 18 13:48:57.438: INFO: Created: latency-svc-vcwxk
Dec 18 13:48:57.442: INFO: Got endpoints: latency-svc-vcwxk [1.338647346s]
Dec 18 13:48:57.494: INFO: Created: latency-svc-pfxmb
Dec 18 13:48:57.572: INFO: Got endpoints: latency-svc-pfxmb [1.425087857s]
Dec 18 13:48:57.586: INFO: Created: latency-svc-b8jtj
Dec 18 13:48:57.594: INFO: Got endpoints: latency-svc-b8jtj [1.35350578s]
Dec 18 13:48:57.665: INFO: Created: latency-svc-7xb7p
Dec 18 13:48:57.988: INFO: Got endpoints: latency-svc-7xb7p [1.685722044s]
Dec 18 13:48:58.038: INFO: Created: latency-svc-6ftdd
Dec 18 13:48:58.072: INFO: Got endpoints: latency-svc-6ftdd [1.594960628s]
Dec 18 13:48:58.075: INFO: Created: latency-svc-v72k6
Dec 18 13:48:58.149: INFO: Got endpoints: latency-svc-v72k6 [1.535650513s]
Dec 18 13:48:58.165: INFO: Created: latency-svc-7xrtk
Dec 18 13:48:58.180: INFO: Got endpoints: latency-svc-7xrtk [1.532075713s]
Dec 18 13:48:58.224: INFO: Created: latency-svc-9qp79
Dec 18 13:48:58.237: INFO: Got endpoints: latency-svc-9qp79 [1.427712496s]
Dec 18 13:48:58.389: INFO: Created: latency-svc-lvc2q
Dec 18 13:48:58.419: INFO: Got endpoints: latency-svc-lvc2q [1.573706534s]
Dec 18 13:48:58.430: INFO: Created: latency-svc-qwg9h
Dec 18 13:48:58.456: INFO: Got endpoints: latency-svc-qwg9h [219.008884ms]
Dec 18 13:48:58.540: INFO: Created: latency-svc-2hlfk
Dec 18 13:48:58.554: INFO: Got endpoints: latency-svc-2hlfk [1.582799543s]
Dec 18 13:48:58.600: INFO: Created: latency-svc-5jd9z
Dec 18 13:48:58.601: INFO: Got endpoints: latency-svc-5jd9z [1.595823177s]
Dec 18 13:48:58.695: INFO: Created: latency-svc-f2bdj
Dec 18 13:48:58.700: INFO: Got endpoints: latency-svc-f2bdj [1.524760994s]
Dec 18 13:48:58.748: INFO: Created: latency-svc-9f9x9
Dec 18 13:48:58.912: INFO: Got endpoints: latency-svc-9f9x9 [1.718065239s]
Dec 18 13:48:58.929: INFO: Created: latency-svc-z9fl4
Dec 18 13:48:58.961: INFO: Got endpoints: latency-svc-z9fl4 [1.711422071s]
Dec 18 13:48:58.971: INFO: Created: latency-svc-dllc2
Dec 18 13:48:58.977: INFO: Got endpoints: latency-svc-dllc2 [1.577486915s]
Dec 18 13:48:59.131: INFO: Created: latency-svc-kqsx9
Dec 18 13:48:59.159: INFO: Got endpoints: latency-svc-kqsx9 [1.716062838s]
Dec 18 13:48:59.203: INFO: Created: latency-svc-t8nxx
Dec 18 13:48:59.214: INFO: Got endpoints: latency-svc-t8nxx [1.641578938s]
Dec 18 13:48:59.347: INFO: Created: latency-svc-h9xk5
Dec 18 13:48:59.348: INFO: Got endpoints: latency-svc-h9xk5 [1.754083052s]
Dec 18 13:48:59.414: INFO: Created: latency-svc-mqh75
Dec 18 13:48:59.435: INFO: Got endpoints: latency-svc-mqh75 [1.446416475s]
Dec 18 13:48:59.576: INFO: Created: latency-svc-mbwc7
Dec 18 13:48:59.587: INFO: Got endpoints: latency-svc-mbwc7 [1.514131334s]
Dec 18 13:48:59.629: INFO: Created: latency-svc-grrhp
Dec 18 13:48:59.636: INFO: Got endpoints: latency-svc-grrhp [1.486288272s]
Dec 18 13:48:59.772: INFO: Created: latency-svc-d8zvn
Dec 18 13:48:59.799: INFO: Got endpoints: latency-svc-d8zvn [1.619116146s]
Dec 18 13:48:59.978: INFO: Created: latency-svc-bmwp4
Dec 18 13:48:59.989: INFO: Got endpoints: latency-svc-bmwp4 [1.570465186s]
Dec 18 13:49:00.053: INFO: Created: latency-svc-qzcnv
Dec 18 13:49:00.184: INFO: Got endpoints: latency-svc-qzcnv [1.727678989s]
Dec 18 13:49:00.198: INFO: Created: latency-svc-8xxdx
Dec 18 13:49:00.217: INFO: Got endpoints: latency-svc-8xxdx [1.662761481s]
Dec 18 13:49:00.264: INFO: Created: latency-svc-jll8f
Dec 18 13:49:00.266: INFO: Got endpoints: latency-svc-jll8f [1.66434498s]
Dec 18 13:49:00.367: INFO: Created: latency-svc-9gbgw
Dec 18 13:49:00.415: INFO: Got endpoints: latency-svc-9gbgw [1.714700159s]
Dec 18 13:49:00.418: INFO: Created: latency-svc-nvjhz
Dec 18 13:49:00.435: INFO: Got endpoints: latency-svc-nvjhz [1.521734531s]
Dec 18 13:49:00.535: INFO: Created: latency-svc-mw9h7
Dec 18 13:49:00.539: INFO: Got endpoints: latency-svc-mw9h7 [1.576837888s]
Dec 18 13:49:00.597: INFO: Created: latency-svc-8qbh5
Dec 18 13:49:00.627: INFO: Created: latency-svc-rpnpc
Dec 18 13:49:00.708: INFO: Got endpoints: latency-svc-8qbh5 [1.730917682s]
Dec 18 13:49:00.729: INFO: Got endpoints: latency-svc-rpnpc [1.569686702s]
Dec 18 13:49:00.742: INFO: Created: latency-svc-v5qnx
Dec 18 13:49:00.775: INFO: Created: latency-svc-w79bh
Dec 18 13:49:00.778: INFO: Got endpoints: latency-svc-v5qnx [1.56326592s]
Dec 18 13:49:00.793: INFO: Got endpoints: latency-svc-w79bh [1.444863344s]
Dec 18 13:49:00.931: INFO: Created: latency-svc-jm258
Dec 18 13:49:00.992: INFO: Created: latency-svc-mwvfn
Dec 18 13:49:00.992: INFO: Got endpoints: latency-svc-jm258 [1.557139664s]
Dec 18 13:49:01.016: INFO: Got endpoints: latency-svc-mwvfn [1.42948635s]
Dec 18 13:49:01.195: INFO: Created: latency-svc-crvc9
Dec 18 13:49:01.213: INFO: Got endpoints: latency-svc-crvc9 [1.576586205s]
Dec 18 13:49:01.245: INFO: Created: latency-svc-xqqq5
Dec 18 13:49:01.267: INFO: Got endpoints: latency-svc-xqqq5 [1.466712282s]
Dec 18 13:49:01.356: INFO: Created: latency-svc-k9q6c
Dec 18 13:49:01.360: INFO: Got endpoints: latency-svc-k9q6c [1.370061603s]
Dec 18 13:49:01.403: INFO: Created: latency-svc-8rsrg
Dec 18 13:49:01.410: INFO: Got endpoints: latency-svc-8rsrg [1.226053s]
Dec 18 13:49:01.439: INFO: Created: latency-svc-d9pq5
Dec 18 13:49:01.449: INFO: Got endpoints: latency-svc-d9pq5 [1.231086767s]
Dec 18 13:49:01.562: INFO: Created: latency-svc-b8ddn
Dec 18 13:49:01.576: INFO: Got endpoints: latency-svc-b8ddn [1.309738958s]
Dec 18 13:49:01.620: INFO: Created: latency-svc-z68pb
Dec 18 13:49:01.627: INFO: Got endpoints: latency-svc-z68pb [1.2112568s]
Dec 18 13:49:01.742: INFO: Created: latency-svc-czz4g
Dec 18 13:49:01.747: INFO: Got endpoints: latency-svc-czz4g [1.311596513s]
Dec 18 13:49:01.810: INFO: Created: latency-svc-ntwn8
Dec 18 13:49:01.913: INFO: Got endpoints: latency-svc-ntwn8 [1.374373358s]
Dec 18 13:49:01.923: INFO: Created: latency-svc-gn8nx
Dec 18 13:49:01.960: INFO: Got endpoints: latency-svc-gn8nx [1.251275328s]
Dec 18 13:49:02.186: INFO: Created: latency-svc-pmnm2
Dec 18 13:49:02.282: INFO: Created: latency-svc-tsj8g
Dec 18 13:49:02.347: INFO: Got endpoints: latency-svc-tsj8g [1.56908339s]
Dec 18 13:49:02.348: INFO: Got endpoints: latency-svc-pmnm2 [1.618903661s]
Dec 18 13:49:02.417: INFO: Created: latency-svc-6jc6m
Dec 18 13:49:02.593: INFO: Got endpoints: latency-svc-6jc6m [1.799346148s]
Dec 18 13:49:02.597: INFO: Created: latency-svc-z448k
Dec 18 13:49:02.621: INFO: Got endpoints: latency-svc-z448k [1.628057121s]
Dec 18 13:49:02.687: INFO: Created: latency-svc-9xqjf
Dec 18 13:49:02.807: INFO: Created: latency-svc-vhdsh
Dec 18 13:49:02.824: INFO: Got endpoints: latency-svc-9xqjf [1.806986723s]
Dec 18 13:49:02.824: INFO: Got endpoints: latency-svc-vhdsh [1.611308359s]
Dec 18 13:49:03.048: INFO: Created: latency-svc-j9cbw
Dec 18 13:49:03.128: INFO: Got endpoints: latency-svc-j9cbw [1.860671634s]
Dec 18 13:49:03.130: INFO: Created: latency-svc-r8mnt
Dec 18 13:49:03.233: INFO: Got endpoints: latency-svc-r8mnt [1.873069447s]
Dec 18 13:49:03.248: INFO: Created: latency-svc-zmbjp
Dec 18 13:49:03.256: INFO: Got endpoints: latency-svc-zmbjp [1.845764208s]
Dec 18 13:49:03.326: INFO: Created: latency-svc-9qlvf
Dec 18 13:49:03.401: INFO: Got endpoints: latency-svc-9qlvf [1.951734402s]
Dec 18 13:49:03.437: INFO: Created: latency-svc-hd5c2
Dec 18 13:49:03.441: INFO: Got endpoints: latency-svc-hd5c2 [1.865359708s]
Dec 18 13:49:03.486: INFO: Created: latency-svc-q7md7
Dec 18 13:49:03.487: INFO: Got endpoints: latency-svc-q7md7 [1.859919114s]
Dec 18 13:49:03.571: INFO: Created: latency-svc-xdkh5
Dec 18 13:49:03.577: INFO: Got endpoints: latency-svc-xdkh5 [1.829941289s]
Dec 18 13:49:03.618: INFO: Created: latency-svc-hr7zg
Dec 18 13:49:03.631: INFO: Got endpoints: latency-svc-hr7zg [1.7164876s]
Dec 18 13:49:03.670: INFO: Created: latency-svc-bf5mq
Dec 18 13:49:03.766: INFO: Got endpoints: latency-svc-bf5mq [1.805818332s]
Dec 18 13:49:03.855: INFO: Created: latency-svc-2xdb5
Dec 18 13:49:03.856: INFO: Got endpoints: latency-svc-2xdb5 [1.507526716s]
Dec 18 13:49:03.983: INFO: Created: latency-svc-n9qtl
Dec 18 13:49:04.010: INFO: Got endpoints: latency-svc-n9qtl [1.660787629s]
Dec 18 13:49:04.010: INFO: Created: latency-svc-5flls
Dec 18 13:49:04.024: INFO: Got endpoints: latency-svc-5flls [1.430349194s]
Dec 18 13:49:04.160: INFO: Created: latency-svc-rngwx
Dec 18 13:49:04.198: INFO: Got endpoints: latency-svc-rngwx [1.577256542s]
Dec 18 13:49:04.203: INFO: Created: latency-svc-hld55
Dec 18 13:49:04.230: INFO: Got endpoints: latency-svc-hld55 [1.404261005s]
Dec 18 13:49:04.294: INFO: Created: latency-svc-cjfzz
Dec 18 13:49:04.300: INFO: Got endpoints: latency-svc-cjfzz [1.475790506s]
Dec 18 13:49:04.351: INFO: Created: latency-svc-ml2b4
Dec 18 13:49:04.356: INFO: Got endpoints: latency-svc-ml2b4 [1.227807732s]
Dec 18 13:49:04.494: INFO: Created: latency-svc-95gvt
Dec 18 13:49:04.502: INFO: Got endpoints: latency-svc-95gvt [1.268075853s]
Dec 18 13:49:04.591: INFO: Created: latency-svc-259qt
Dec 18 13:49:04.690: INFO: Got endpoints: latency-svc-259qt [1.433995254s]
Dec 18 13:49:04.727: INFO: Created: latency-svc-zgrhf
Dec 18 13:49:04.926: INFO: Got endpoints: latency-svc-zgrhf [1.525275129s]
Dec 18 13:49:04.936: INFO: Created: latency-svc-cv82g
Dec 18 13:49:05.212: INFO: Got endpoints: latency-svc-cv82g [1.770207206s]
Dec 18 13:49:05.232: INFO: Created: latency-svc-mssth
Dec 18 13:49:05.233: INFO: Got endpoints: latency-svc-mssth [1.745797113s]
Dec 18 13:49:05.304: INFO: Created: latency-svc-qp9nz
Dec 18 13:49:05.486: INFO: Got endpoints: latency-svc-qp9nz [1.908612304s]
Dec 18 13:49:05.490: INFO: Created: latency-svc-zlllt
Dec 18 13:49:05.504: INFO: Got endpoints: latency-svc-zlllt [1.873481954s]
Dec 18 13:49:05.538: INFO: Created: latency-svc-8kd77
Dec 18 13:49:05.598: INFO: Got endpoints: latency-svc-8kd77 [1.830884121s]
Dec 18 13:49:05.630: INFO: Created: latency-svc-9mxbj
Dec 18 13:49:05.661: INFO: Got endpoints: latency-svc-9mxbj [1.804609882s]
Dec 18 13:49:05.665: INFO: Created: latency-svc-pw5bp
Dec 18 13:49:05.670: INFO: Got endpoints: latency-svc-pw5bp [1.659960616s]
Dec 18 13:49:05.772: INFO: Created: latency-svc-l5mpl
Dec 18 13:49:05.776: INFO: Got endpoints: latency-svc-l5mpl [1.75196658s]
Dec 18 13:49:05.842: INFO: Created: latency-svc-ndmtx
Dec 18 13:49:05.859: INFO: Got endpoints: latency-svc-ndmtx [1.660229328s]
Dec 18 13:49:06.006: INFO: Created: latency-svc-gpv2c
Dec 18 13:49:06.032: INFO: Got endpoints: latency-svc-gpv2c [1.800945558s]
Dec 18 13:49:06.040: INFO: Created: latency-svc-pj659
Dec 18 13:49:06.049: INFO: Got endpoints: latency-svc-pj659 [1.74826185s]
Dec 18 13:49:06.160: INFO: Created: latency-svc-4tgrx
Dec 18 13:49:06.163: INFO: Got endpoints: latency-svc-4tgrx [1.806532698s]
Dec 18 13:49:06.201: INFO: Created: latency-svc-k77bx
Dec 18 13:49:06.206: INFO: Got endpoints: latency-svc-k77bx [1.703709728s]
Dec 18 13:49:06.330: INFO: Created: latency-svc-hmmfg
Dec 18 13:49:06.334: INFO: Got endpoints: latency-svc-hmmfg [1.642904426s]
Dec 18 13:49:06.386: INFO: Created: latency-svc-7c9vz
Dec 18 13:49:06.401: INFO: Got endpoints: latency-svc-7c9vz [1.474605008s]
Dec 18 13:49:06.504: INFO: Created: latency-svc-7n4vg
Dec 18 13:49:06.562: INFO: Got endpoints: latency-svc-7n4vg [1.349454973s]
Dec 18 13:49:06.572: INFO: Created: latency-svc-fvlbv
Dec 18 13:49:06.580: INFO: Got endpoints: latency-svc-fvlbv [1.347004087s]
Dec 18 13:49:06.691: INFO: Created: latency-svc-nfhwb
Dec 18 13:49:06.719: INFO: Got endpoints: latency-svc-nfhwb [1.233405135s]
Dec 18 13:49:06.763: INFO: Created: latency-svc-rgqxq
Dec 18 13:49:06.875: INFO: Got endpoints: latency-svc-rgqxq [1.370996641s]
Dec 18 13:49:06.911: INFO: Created: latency-svc-9sdh6
Dec 18 13:49:06.929: INFO: Got endpoints: latency-svc-9sdh6 [1.331355898s]
Dec 18 13:49:07.022: INFO: Created: latency-svc-2vjfl
Dec 18 13:49:07.085: INFO: Got endpoints: latency-svc-2vjfl [1.424009494s]
Dec 18 13:49:07.089: INFO: Created: latency-svc-tc92x
Dec 18 13:49:07.096: INFO: Got endpoints: latency-svc-tc92x [1.425493268s]
Dec 18 13:49:07.224: INFO: Created: latency-svc-98kq2
Dec 18 13:49:07.261: INFO: Got endpoints: latency-svc-98kq2 [1.484641232s]
Dec 18 13:49:07.369: INFO: Created: latency-svc-zr6kw
Dec 18 13:49:07.369: INFO: Got endpoints: latency-svc-zr6kw [1.509334281s]
Dec 18 13:49:07.403: INFO: Created: latency-svc-6z2vw
Dec 18 13:49:07.412: INFO: Got endpoints: latency-svc-6z2vw [1.38016224s]
Dec 18 13:49:07.454: INFO: Created: latency-svc-2rdzr
Dec 18 13:49:07.536: INFO: Got endpoints: latency-svc-2rdzr [1.486393666s]
Dec 18 13:49:07.566: INFO: Created: latency-svc-qmwm2
Dec 18 13:49:07.568: INFO: Got endpoints: latency-svc-qmwm2 [1.405406017s]
Dec 18 13:49:07.628: INFO: Created: latency-svc-zw2pj
Dec 18 13:49:07.707: INFO: Got endpoints: latency-svc-zw2pj [1.50077786s]
Dec 18 13:49:07.738: INFO: Created: latency-svc-7lcsb
Dec 18 13:49:07.752: INFO: Got endpoints: latency-svc-7lcsb [1.41801224s]
Dec 18 13:49:07.893: INFO: Created: latency-svc-wbw4b
Dec 18 13:49:07.924: INFO: Got endpoints: latency-svc-wbw4b [1.5224649s]
Dec 18 13:49:07.969: INFO: Created: latency-svc-8kb89
Dec 18 13:49:08.065: INFO: Got endpoints: latency-svc-8kb89 [1.502574706s]
Dec 18 13:49:08.130: INFO: Created: latency-svc-bpjsb
Dec 18 13:49:08.141: INFO: Got endpoints: latency-svc-bpjsb [1.560585248s]
Dec 18 13:49:08.236: INFO: Created: latency-svc-gzlkg
Dec 18 13:49:08.250: INFO: Got endpoints: latency-svc-gzlkg [1.5307925s]
Dec 18 13:49:08.304: INFO: Created: latency-svc-52qfn
Dec 18 13:49:08.306: INFO: Got endpoints: latency-svc-52qfn [1.429943973s]
Dec 18 13:49:08.306: INFO: Latencies: [189.694561ms 216.943471ms 219.008884ms 332.873612ms 396.417691ms 538.920744ms 681.727647ms 704.337027ms 758.028021ms 914.822119ms 1.185695682s 1.209748207s 1.2112568s 1.22246553s 1.226053s 1.227807732s 1.229440561s 1.231086767s 1.233405135s 1.236802746s 1.248675354s 1.251275328s 1.268075853s 1.274108485s 1.306359771s 1.309738958s 1.311596513s 1.331355898s 1.337001251s 1.338647346s 1.339498651s 1.341784636s 1.347004087s 1.349454973s 1.35350578s 1.359139839s 1.362361645s 1.369329758s 1.370061603s 1.370996641s 1.374373358s 1.38016224s 1.380660971s 1.381873024s 1.383585227s 1.393496328s 1.399365124s 1.401103085s 1.404261005s 1.405406017s 1.41801224s 1.423052947s 1.424009494s 1.425087857s 1.425493268s 1.427712496s 1.42948635s 1.429943973s 1.430349194s 1.433995254s 1.435748255s 1.444863344s 1.446416475s 1.448059512s 1.456508119s 1.458369434s 1.466712282s 1.474605008s 1.475790506s 1.476614543s 1.484641232s 1.485400205s 1.486288272s 1.486393666s 1.487297947s 1.49577482s 1.50077786s 1.502574706s 1.507526716s 1.508910209s 1.509334281s 1.514131334s 1.51867056s 1.518816786s 1.519968073s 1.521734531s 1.5224649s 1.522844011s 1.524760994s 1.525275129s 1.5307925s 1.532075713s 1.535650513s 1.537006931s 1.537216141s 1.545850398s 1.557139664s 1.560585248s 1.56326592s 1.567605034s 1.56908339s 1.569686702s 1.570465186s 1.571289328s 1.572425672s 1.573706534s 1.576164524s 1.57626225s 1.576371543s 1.576586205s 1.576837888s 1.577256542s 1.577486915s 1.581504989s 1.582799543s 1.584400242s 1.591726782s 1.594960628s 1.595823177s 1.598832861s 1.604452159s 1.610625387s 1.611308359s 1.61253583s 1.614684216s 1.618903661s 1.619116146s 1.620449913s 1.620836828s 1.624111455s 1.628057121s 1.641578938s 1.642904426s 1.653577084s 1.659960616s 1.660229328s 1.660787629s 1.662761481s 1.66434498s 1.685722044s 1.685951934s 1.686297218s 1.692320007s 1.693386369s 1.698395237s 1.698651854s 1.703709728s 1.707565204s 1.711422071s 1.714700159s 1.715029935s 1.716062838s 1.7164876s 1.718065239s 1.727678989s 1.730917682s 1.737345848s 1.741840149s 1.745797113s 1.74826185s 1.75196658s 1.753321188s 1.754083052s 1.755796323s 1.770207206s 1.784130593s 1.799346148s 1.800945558s 1.804609882s 1.805818332s 1.806532698s 1.806986723s 1.80734231s 1.815541629s 1.829941289s 1.830884121s 1.845764208s 1.859919114s 1.860671634s 1.865359708s 1.873069447s 1.873481954s 1.908612304s 1.931349295s 1.951734402s 2.23853122s 2.331660601s 2.411397163s 2.444857377s 2.458002194s 2.533051224s 2.544212387s 2.547968618s 2.551210196s 2.635926051s 2.636955585s 2.681032132s 2.691576382s 2.717850425s 2.721148791s]
Dec 18 13:49:08.307: INFO: 50 %ile: 1.56908339s
Dec 18 13:49:08.307: INFO: 90 %ile: 1.873069447s
Dec 18 13:49:08.307: INFO: 99 %ile: 2.717850425s
Dec 18 13:49:08.307: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:49:08.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4133" for this suite.
Dec 18 13:50:08.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:50:08.552: INFO: namespace svc-latency-4133 deletion completed in 1m0.236321005s

• [SLOW TEST:90.190 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
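
The 50/90/99 %ile figures above come from 200 create-service-then-observe-endpoints samples. A rough manual version of the same measurement, as a sketch only: the deployment and service names are made up, GNU date is assumed for nanosecond timestamps, and client-side polling adds overhead the suite's watch-based timing does not have.

kubectl create deployment latency-probe --image=docker.io/library/nginx:1.14-alpine
kubectl wait --for=condition=available deployment/latency-probe --timeout=120s
start=$(date +%s%N)
kubectl expose deployment latency-probe --name=latency-probe-svc --port=80
# poll until the service reports at least one ready endpoint address
until [ -n "$(kubectl get endpoints latency-probe-svc \
    -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.05
done
end=$(date +%s%N)
echo "endpoints visible after $(( (end - start) / 1000000 ))ms"
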
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:50:08.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 18 13:50:08.716: INFO: Waiting up to 5m0s for pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c" in namespace "emptydir-303" to be "success or failure"
Dec 18 13:50:08.723: INFO: Pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.980164ms
Dec 18 13:50:10.731: INFO: Pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015531345s
Dec 18 13:50:12.753: INFO: Pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037338447s
Dec 18 13:50:14.759: INFO: Pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043231203s
Dec 18 13:50:16.770: INFO: Pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054628095s
STEP: Saw pod success
Dec 18 13:50:16.771: INFO: Pod "pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c" satisfied condition "success or failure"
Dec 18 13:50:16.777: INFO: Trying to get logs from node iruya-node pod pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c container test-container: 
STEP: delete the pod
Dec 18 13:50:16.840: INFO: Waiting for pod pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c to disappear
Dec 18 13:50:16.866: INFO: Pod pod-ef490119-74c9-4c5a-982d-88e9d33a7d6c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:50:16.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-303" for this suite.
Dec 18 13:50:22.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:50:23.060: INFO: namespace emptydir-303 deletion completed in 6.121170381s

• [SLOW TEST:14.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
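
The "(root,0777,tmpfs)" case above writes a 0777 file as root into a Memory-backed emptyDir. A minimal sketch of that kind of pod, with an assumed name, image, and mount path rather than the exact manifest the framework generates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # create a file, set the mode under test, then print mode and owner
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a %U' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # Memory selects tmpfs as the backing store
EOF
kubectl logs pod/emptydir-0777-tmpfs   # expect: 777 root
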
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:50:23.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:50:23.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95" in namespace "downward-api-4405" to be "success or failure"
Dec 18 13:50:23.228: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524673ms
Dec 18 13:50:25.235: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013860318s
Dec 18 13:50:27.264: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042831899s
Dec 18 13:50:29.275: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05355973s
Dec 18 13:50:31.285: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063288561s
Dec 18 13:50:33.292: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070544934s
STEP: Saw pod success
Dec 18 13:50:33.292: INFO: Pod "downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95" satisfied condition "success or failure"
Dec 18 13:50:33.295: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95 container client-container: 
STEP: delete the pod
Dec 18 13:50:33.756: INFO: Waiting for pod downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95 to disappear
Dec 18 13:50:33.770: INFO: Pod downwardapi-volume-bc742081-9dcf-4f31-a324-6bfd2a472a95 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:50:33.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4405" for this suite.
Dec 18 13:50:39.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:50:39.981: INFO: namespace downward-api-4405 deletion completed in 6.185387349s

• [SLOW TEST:16.921 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
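
"Set mode on item file" exercises the per-item mode field of a downwardAPI volume. A sketch with assumed names (the framework's actual pod differs); note the -L so stat follows the atomic-writer symlink to the real file:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-item-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # the per-item mode under test
EOF
kubectl logs pod/downwardapi-item-mode   # expect: 400
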
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:50:39.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 18 13:50:40.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3033 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 18 13:50:52.317: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 18 13:50:52.317: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:50:54.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3033" for this suite.
Dec 18 13:51:00.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:51:00.478: INFO: namespace kubectl-3033 deletion completed in 6.146470852s

• [SLOW TEST:20.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
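
The stderr above flags --generator=job/v1 as deprecated. An assumed-equivalent generator-less form of the same create, attach, then clean-up flow; with --restart=Never this creates a bare pod rather than a Job, so OnFailure retry semantics are lost:

echo 'abcd1234' | kubectl run e2e-test-rm-busybox \
  --image=docker.io/library/busybox:1.29 \
  --rm --restart=Never --attach --stdin \
  -- sh -c "cat && echo 'stdin closed'"
# --rm deletes the pod once the attached session ends, mirroring the
# job deletion the test verifies
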
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:51:00.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 13:51:00.639: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 18 13:51:04.034: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:51:05.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8300" for this suite.
Dec 18 13:51:13.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:51:14.603: INFO: namespace replication-controller-8300 deletion completed in 9.527662146s

• [SLOW TEST:14.123 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
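
The quota/RC interplay above can be reproduced by hand; the names mirror the log, while the image and the jsonpath query are assumptions:

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3          # one more than the quota allows
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# the controller surfaces the quota failure as an RC status condition
kubectl get rc condition-test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
# scaling down within the quota clears the condition, as the test checks
kubectl scale rc condition-test --replicas=2
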
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:51:14.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8533b800-09c0-4bad-a8cb-9c3bd8fc5004
STEP: Creating a pod to test consume configMaps
Dec 18 13:51:15.173: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369" in namespace "projected-9747" to be "success or failure"
Dec 18 13:51:15.194: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Pending", Reason="", readiness=false. Elapsed: 19.785682ms
Dec 18 13:51:17.204: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029776765s
Dec 18 13:51:19.215: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040920059s
Dec 18 13:51:21.236: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061592077s
Dec 18 13:51:23.252: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077876208s
Dec 18 13:51:25.267: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Running", Reason="", readiness=true. Elapsed: 10.092760621s
Dec 18 13:51:27.282: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.108074005s
STEP: Saw pod success
Dec 18 13:51:27.282: INFO: Pod "pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369" satisfied condition "success or failure"
Dec 18 13:51:27.288: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 13:51:27.355: INFO: Waiting for pod pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369 to disappear
Dec 18 13:51:27.366: INFO: Pod pod-projected-configmaps-6ceb49d5-69f9-4b79-87d8-ccede7081369 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:51:27.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9747" for this suite.
Dec 18 13:51:33.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:51:33.469: INFO: namespace projected-9747 deletion completed in 6.094089404s

• [SLOW TEST:18.864 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
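
The non-root variant above mounts a projected ConfigMap volume and reads it as an unprivileged UID. A minimal sketch under assumed names; the default 0644 file mode is what makes the read work for UID 1000:

kubectl create configmap projected-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000    # the non-root part of the test
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "id -u && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm
EOF
kubectl logs pod/projected-cm-nonroot   # expect: 1000, then value-1
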
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:51:33.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 18 13:51:33.542: INFO: Waiting up to 5m0s for pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd" in namespace "emptydir-6614" to be "success or failure"
Dec 18 13:51:33.548: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.143734ms
Dec 18 13:51:35.562: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01978687s
Dec 18 13:51:37.578: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035186463s
Dec 18 13:51:39.595: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052620739s
Dec 18 13:51:41.620: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077234374s
Dec 18 13:51:43.668: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125826415s
STEP: Saw pod success
Dec 18 13:51:43.669: INFO: Pod "pod-a93f577e-a6d3-4db2-b9d6-d684616619fd" satisfied condition "success or failure"
Dec 18 13:51:43.679: INFO: Trying to get logs from node iruya-node pod pod-a93f577e-a6d3-4db2-b9d6-d684616619fd container test-container: 
STEP: delete the pod
Dec 18 13:51:43.753: INFO: Waiting for pod pod-a93f577e-a6d3-4db2-b9d6-d684616619fd to disappear
Dec 18 13:51:43.761: INFO: Pod pod-a93f577e-a6d3-4db2-b9d6-d684616619fd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:51:43.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6614" for this suite.
Dec 18 13:51:49.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:51:50.010: INFO: namespace emptydir-6614 deletion completed in 6.166471878s

• [SLOW TEST:16.540 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
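
Where the earlier emptyDir case checked a file's mode, this one checks the volume root itself: a fresh Memory-backed emptyDir should be a tmpfs mount with mode 0777. Sketch only, with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-mode
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # print the mount-point mode and confirm the filesystem type is tmpfs
    command: ["sh", "-c", "stat -c '%a' /test-volume && grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs pod/emptydir-tmpfs-mode   # expect 777 and a tmpfs entry from /proc/mounts
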
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:51:50.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-r2rz
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 13:51:50.280: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-r2rz" in namespace "subpath-9824" to be "success or failure"
Dec 18 13:51:50.303: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.615591ms
Dec 18 13:51:52.312: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031931311s
Dec 18 13:51:54.319: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038463471s
Dec 18 13:51:56.336: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055459016s
Dec 18 13:51:58.346: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065415421s
Dec 18 13:52:00.360: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 10.079835969s
Dec 18 13:52:02.370: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 12.089488285s
Dec 18 13:52:04.381: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 14.100906317s
Dec 18 13:52:06.391: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 16.110574339s
Dec 18 13:52:08.400: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 18.119526825s
Dec 18 13:52:10.414: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 20.133213515s
Dec 18 13:52:12.430: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 22.149520895s
Dec 18 13:52:14.442: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 24.161776477s
Dec 18 13:52:16.453: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 26.172426585s
Dec 18 13:52:18.470: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Running", Reason="", readiness=true. Elapsed: 28.189689712s
Dec 18 13:52:20.485: INFO: Pod "pod-subpath-test-projected-r2rz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.20462356s
STEP: Saw pod success
Dec 18 13:52:20.485: INFO: Pod "pod-subpath-test-projected-r2rz" satisfied condition "success or failure"
Dec 18 13:52:20.492: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-r2rz container test-container-subpath-projected-r2rz: 
STEP: delete the pod
Dec 18 13:52:20.675: INFO: Waiting for pod pod-subpath-test-projected-r2rz to disappear
Dec 18 13:52:20.696: INFO: Pod pod-subpath-test-projected-r2rz no longer exists
STEP: Deleting pod pod-subpath-test-projected-r2rz
Dec 18 13:52:20.697: INFO: Deleting pod "pod-subpath-test-projected-r2rz" in namespace "subpath-9824"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:52:20.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9824" for this suite.
Dec 18 13:52:26.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:52:26.950: INFO: namespace subpath-9824 deletion completed in 6.236881588s

• [SLOW TEST:36.939 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
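
The atomic-writer subPath case mounts a single projected file into the container via subPath. A sketch with assumed names; one property these tests pin down is that a subPath mount bypasses the atomic-writer symlink swap, so later ConfigMap updates are not visible through it:

kubectl create configmap subpath-cm --from-literal=key=initial
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-sketch
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /subpath-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /subpath-file
      subPath: key           # mount one projected file, not the whole volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-cm
EOF
kubectl logs pod/subpath-projected-sketch   # expect: initial
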
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:52:26.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 18 13:52:27.138: INFO: Waiting up to 5m0s for pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd" in namespace "containers-7385" to be "success or failure"
Dec 18 13:52:27.157: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.609386ms
Dec 18 13:52:29.173: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035088081s
Dec 18 13:52:31.186: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047571733s
Dec 18 13:52:33.194: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055666552s
Dec 18 13:52:35.201: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062370384s
Dec 18 13:52:37.207: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069340511s
STEP: Saw pod success
Dec 18 13:52:37.208: INFO: Pod "client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd" satisfied condition "success or failure"
Dec 18 13:52:37.211: INFO: Trying to get logs from node iruya-node pod client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd container test-container: 
STEP: delete the pod
Dec 18 13:52:37.290: INFO: Waiting for pod client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd to disappear
Dec 18 13:52:37.298: INFO: Pod client-containers-dbe1da7c-3ea2-4e3d-9ac1-2021062280cd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:52:37.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7385" for this suite.
Dec 18 13:52:43.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:52:43.492: INFO: namespace containers-7385 deletion completed in 6.186076553s

• [SLOW TEST:16.541 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
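
"Override the image's default command" relies on the pod spec mapping: command replaces the image ENTRYPOINT, and args replaces CMD. A minimal sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-sketch
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["echo", "overridden entrypoint"]   # replaces the image ENTRYPOINT
EOF
kubectl logs pod/entrypoint-override-sketch   # expect: overridden entrypoint
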
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:52:43.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 13:52:43.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1726'
Dec 18 13:52:43.762: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 13:52:43.762: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 18 13:52:45.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1726'
Dec 18 13:52:45.972: INFO: stderr: ""
Dec 18 13:52:45.972: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:52:45.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1726" for this suite.
Dec 18 13:53:08.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:53:08.238: INFO: namespace kubectl-1726 deletion completed in 22.256788319s

• [SLOW TEST:24.746 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
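
As with the --rm job case earlier, the default deployment/apps.v1 generator used above is deprecated. Explicit replacements (the deployment and pod names are illustrative):

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
# or a bare pod, which is what a generator-less `kubectl run` creates
# in later releases:
kubectl run e2e-test-nginx-pod --restart=Never \
  --image=docker.io/library/nginx:1.14-alpine
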
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:53:08.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3339
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3339
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3339
Dec 18 13:53:08.379: INFO: Found 0 stateful pods, waiting for 1
Dec 18 13:53:18.390: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 18 13:53:18.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 13:53:19.184: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 13:53:19.185: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 13:53:19.185: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 13:53:19.195: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 13:53:19.196: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 13:53:19.243: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999995289s
Dec 18 13:53:20.253: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988753499s
Dec 18 13:53:21.268: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978913909s
Dec 18 13:53:22.282: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963629436s
Dec 18 13:53:23.300: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.950028278s
Dec 18 13:53:24.598: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.931508231s
Dec 18 13:53:25.606: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.633775352s
Dec 18 13:53:26.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.625725157s
Dec 18 13:53:27.624: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.615468392s
Dec 18 13:53:28.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 607.585953ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3339
Dec 18 13:53:29.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 13:53:30.257: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 13:53:30.258: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 13:53:30.258: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 13:53:30.270: INFO: Found 1 stateful pods, waiting for 3
Dec 18 13:53:40.285: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:53:40.285: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:53:40.285: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 13:53:50.297: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:53:50.297: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 13:53:50.297: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 18 13:53:50.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 13:53:50.920: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 13:53:50.921: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 13:53:50.921: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 13:53:50.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 13:53:51.627: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 13:53:51.627: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 13:53:51.627: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 13:53:51.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 13:53:52.353: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 13:53:52.353: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 13:53:52.353: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 13:53:52.353: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 13:53:52.362: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 18 13:54:02.409: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 13:54:02.409: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 13:54:02.409: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 13:54:02.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995786s
Dec 18 13:54:03.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989730498s
Dec 18 13:54:04.489: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977727837s
Dec 18 13:54:05.502: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950096612s
Dec 18 13:54:06.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.937857399s
Dec 18 13:54:07.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.917675918s
Dec 18 13:54:08.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.733175497s
Dec 18 13:54:09.758: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.714699596s
Dec 18 13:54:10.773: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.681593631s
Dec 18 13:54:11.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 666.692288ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3339
Dec 18 13:54:12.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 13:54:13.405: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 13:54:13.405: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 13:54:13.405: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 13:54:13.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 13:54:13.838: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 13:54:13.839: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 13:54:13.839: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 13:54:13.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3339 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 13:54:14.433: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 13:54:14.433: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 13:54:14.433: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 13:54:14.433: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 18 13:54:45.082: INFO: Deleting all statefulset in ns statefulset-3339
Dec 18 13:54:45.096: INFO: Scaling statefulset ss to 0
Dec 18 13:54:45.145: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 13:54:45.150: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:54:45.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3339" for this suite.
Dec 18 13:54:51.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:54:51.444: INFO: namespace statefulset-3339 deletion completed in 6.229702649s

• [SLOW TEST:103.206 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
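For reference, the halting behavior above is reproducible by hand. A minimal sketch, assuming a StatefulSet ss whose pods serve /usr/share/nginx/html behind an HTTP readiness probe (as the e2e test image does); the namespace and selector are taken from the log:

# Break readiness on ss-0: the probed file disappears, the pod goes
# Ready=false, and ordered scale-up must halt at 1 replica.
kubectl exec --namespace=statefulset-3339 ss-0 -- \
  /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# Ask for 3 replicas; ss-1 is not created until ss-0 is Ready again.
kubectl scale statefulset ss --namespace=statefulset-3339 --replicas=3

# Restore readiness, then watch ss-1 and ss-2 come up in order.
kubectl exec --namespace=statefulset-3339 ss-0 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl get pods --namespace=statefulset-3339 -l baz=blah,foo=bar --watch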
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:54:51.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 18 13:54:51.582: INFO: Waiting up to 5m0s for pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647" in namespace "containers-8925" to be "success or failure"
Dec 18 13:54:51.593: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647": Phase="Pending", Reason="", readiness=false. Elapsed: 11.168442ms
Dec 18 13:54:53.609: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026971498s
Dec 18 13:54:55.627: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04506515s
Dec 18 13:54:57.637: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054797536s
Dec 18 13:54:59.644: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062299751s
Dec 18 13:55:01.659: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07678444s
STEP: Saw pod success
Dec 18 13:55:01.659: INFO: Pod "client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647" satisfied condition "success or failure"
Dec 18 13:55:01.663: INFO: Trying to get logs from node iruya-node pod client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647 container test-container: 
STEP: delete the pod
Dec 18 13:55:01.792: INFO: Waiting for pod client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647 to disappear
Dec 18 13:55:01.797: INFO: Pod client-containers-1fd378d5-d033-4843-aaf2-5edd2c608647 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:55:01.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8925" for this suite.
Dec 18 13:55:07.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:55:07.999: INFO: namespace containers-8925 deletion completed in 6.195241636s

• [SLOW TEST:16.553 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
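The pod under test leaves command and args unset, so the container runs the image's own ENTRYPOINT/CMD. A minimal sketch; the image is an assumption (any image with a default entrypoint would do):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # assumption: stands in for the e2e test image
    # no command:, no args: the image's ENTRYPOINT and CMD are used as-is
EOF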
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:55:08.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 18 13:55:08.114: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4983,SelfLink:/api/v1/namespaces/watch-4983/configmaps/e2e-watch-test-watch-closed,UID:3e5b15aa-2604-4461-83dc-d7b80d7816a2,ResourceVersion:17144731,Generation:0,CreationTimestamp:2019-12-18 13:55:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 13:55:08.115: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4983,SelfLink:/api/v1/namespaces/watch-4983/configmaps/e2e-watch-test-watch-closed,UID:3e5b15aa-2604-4461-83dc-d7b80d7816a2,ResourceVersion:17144732,Generation:0,CreationTimestamp:2019-12-18 13:55:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 18 13:55:08.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4983,SelfLink:/api/v1/namespaces/watch-4983/configmaps/e2e-watch-test-watch-closed,UID:3e5b15aa-2604-4461-83dc-d7b80d7816a2,ResourceVersion:17144733,Generation:0,CreationTimestamp:2019-12-18 13:55:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 13:55:08.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4983,SelfLink:/api/v1/namespaces/watch-4983/configmaps/e2e-watch-test-watch-closed,UID:3e5b15aa-2604-4461-83dc-d7b80d7816a2,ResourceVersion:17144734,Generation:0,CreationTimestamp:2019-12-18 13:55:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:55:08.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4983" for this suite.
Dec 18 13:55:14.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:55:14.385: INFO: namespace watch-4983 deletion completed in 6.188973636s

• [SLOW TEST:6.385 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
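The same semantics can be exercised against the raw API: record the resourceVersion of the last event the first watch delivered, then open a new watch from that point. A sketch through kubectl proxy, reusing resourceVersion 17144732 from the log; the proxy port is illustrative:

kubectl proxy --port=8001 &

# Resume from the last observed resourceVersion: the server replays only the
# MODIFIED (mutation: 2) and DELETED events that happened while no watch was open.
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/watch-4983/configmaps?watch=1&resourceVersion=17144732&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted'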
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:55:14.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 18 13:55:14.501: INFO: Waiting up to 5m0s for pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b" in namespace "emptydir-3824" to be "success or failure"
Dec 18 13:55:14.506: INFO: Pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.757224ms
Dec 18 13:55:16.522: INFO: Pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021096068s
Dec 18 13:55:18.539: INFO: Pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037981323s
Dec 18 13:55:20.553: INFO: Pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052238341s
Dec 18 13:55:22.592: INFO: Pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091456975s
STEP: Saw pod success
Dec 18 13:55:22.593: INFO: Pod "pod-5e75aa76-e324-4220-b868-89ef0598b13b" satisfied condition "success or failure"
Dec 18 13:55:22.640: INFO: Trying to get logs from node iruya-node pod pod-5e75aa76-e324-4220-b868-89ef0598b13b container test-container: 
STEP: delete the pod
Dec 18 13:55:22.774: INFO: Waiting for pod pod-5e75aa76-e324-4220-b868-89ef0598b13b to disappear
Dec 18 13:55:22.793: INFO: Pod pod-5e75aa76-e324-4220-b868-89ef0598b13b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:55:22.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3824" for this suite.
Dec 18 13:55:28.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:55:29.033: INFO: namespace emptydir-3824 deletion completed in 6.221756072s

• [SLOW TEST:14.648 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
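The "(non-root,0644,default)" triple in the test name means: run as a non-root UID, expect the created file to have mode 0644, on the default emptyDir medium (node disk). A minimal sketch of such a pod; the UID, image, and paths are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo            # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # assumption: any non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo hi > /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium: node-local storage
EOF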
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:55:29.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 18 13:55:29.214: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 18 13:55:29.959: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 18 13:55:32.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274130, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:55:34.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274130, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:55:36.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274130, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:55:38.245: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274130, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:55:40.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274130, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274129, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 13:55:47.037: INFO: Waited 4.794216611s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:55:47.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9950" for this suite.
Dec 18 13:55:53.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:55:54.188: INFO: namespace aggregator-9950 deletion completed in 6.62579956s

• [SLOW TEST:25.154 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
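The registration step boils down to an APIService object telling the aggregator to route a group/version to a Service in front of the sample deployment. A sketch, assuming the Deployment and a fronting Service already exist as in the log; the group name follows the sample-apiserver convention and the TLS shortcut is for illustration only:

kubectl apply -f - <<EOF
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io        # assumption: sample-apiserver's group/version
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true         # illustration only; use caBundle in practice
  service:
    name: sample-api                  # assumption: Service fronting the deployment
    namespace: aggregator-9950
EOF

# The new group appears in discovery once the backend reports ready:
kubectl get --raw /apis/wardle.k8s.io/v1alpha1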
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:55:54.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7854.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7854.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 13:56:06.396: INFO: File wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-a9a5911d-5161-4eb8-8fd6-68557e8972cb contains '' instead of 'foo.example.com.'
Dec 18 13:56:06.408: INFO: File jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-a9a5911d-5161-4eb8-8fd6-68557e8972cb contains '' instead of 'foo.example.com.'
Dec 18 13:56:06.408: INFO: Lookups using dns-7854/dns-test-a9a5911d-5161-4eb8-8fd6-68557e8972cb failed for: [wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local]

Dec 18 13:56:11.439: INFO: DNS probes using dns-test-a9a5911d-5161-4eb8-8fd6-68557e8972cb succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7854.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7854.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 13:56:27.821: INFO: File wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 18 13:56:27.832: INFO: File jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 contains '' instead of 'bar.example.com.'
Dec 18 13:56:27.832: INFO: Lookups using dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 failed for: [wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local]

Dec 18 13:56:32.849: INFO: File wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 18 13:56:32.878: INFO: File jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 18 13:56:32.878: INFO: Lookups using dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 failed for: [wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local]

Dec 18 13:56:37.853: INFO: File wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 18 13:56:37.870: INFO: File jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 18 13:56:37.870: INFO: Lookups using dns-7854/dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 failed for: [wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local]

Dec 18 13:56:42.851: INFO: DNS probes using dns-test-f178079b-47fd-40fc-b9d2-e69583697b12 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7854.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7854.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7854.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 13:56:59.526: INFO: File jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local from pod  dns-7854/dns-test-7b045457-29be-413d-9813-655d839f072e contains '' instead of '10.108.55.214'
Dec 18 13:56:59.526: INFO: Lookups using dns-7854/dns-test-7b045457-29be-413d-9813-655d839f072e failed for: [jessie_udp@dns-test-service-3.dns-7854.svc.cluster.local]

Dec 18 13:57:04.554: INFO: DNS probes using dns-test-7b045457-29be-413d-9813-655d839f072e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:57:04.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7854" for this suite.
Dec 18 13:57:12.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:57:12.883: INFO: namespace dns-7854 deletion completed in 8.154192678s

• [SLOW TEST:78.695 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
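Behind the probes: an ExternalName Service is published by the cluster DNS as a plain CNAME, so editing spec.externalName (or switching the Service to type ClusterIP, as the third phase does) changes what the unchanged name resolves to. A minimal sketch mirroring the log's first phase:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-7854
spec:
  type: ExternalName
  externalName: foo.example.com
EOF

# From any pod in the cluster:
dig +short dns-test-service-3.dns-7854.svc.cluster.local CNAME
# expected: foo.example.com.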
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:57:12.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 13:57:13.010: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca" in namespace "downward-api-6327" to be "success or failure"
Dec 18 13:57:13.017: INFO: Pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca": Phase="Pending", Reason="", readiness=false. Elapsed: 5.997086ms
Dec 18 13:57:15.025: INFO: Pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013867315s
Dec 18 13:57:17.033: INFO: Pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021914928s
Dec 18 13:57:19.117: INFO: Pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105841834s
Dec 18 13:57:21.123: INFO: Pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112635698s
STEP: Saw pod success
Dec 18 13:57:21.123: INFO: Pod "downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca" satisfied condition "success or failure"
Dec 18 13:57:21.129: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca container client-container: 
STEP: delete the pod
Dec 18 13:57:21.172: INFO: Waiting for pod downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca to disappear
Dec 18 13:57:21.179: INFO: Pod downwardapi-volume-3ecd92fb-5a61-4f3e-93b0-cda59d6118ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:57:21.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6327" for this suite.
Dec 18 13:57:27.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:57:27.433: INFO: namespace downward-api-6327 deletion completed in 6.202890114s

• [SLOW TEST:14.550 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
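The volume plugin under test maps the container's memory limit into a file through a resourceFieldRef. A minimal sketch; the names and the 64Mi limit are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # assumption: stands in for the e2e test image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # the value the test asserts on
EOF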
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:57:27.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 18 13:57:27.552: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 13:57:43.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2840" for this suite.
Dec 18 13:57:49.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 13:57:49.185: INFO: namespace pods-2840 deletion completed in 6.138726018s

• [SLOW TEST:21.752 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
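The deletion half of this test is ordinary graceful termination: after the delete request the pod object remains visible, carrying a deletionTimestamp, until the kubelet confirms the containers have stopped; only then does the watch deliver DELETED. A sketch of observing that by hand; the pod name is illustrative:

kubectl get pods --namespace=pods-2840 --watch &

kubectl delete pod mypod --namespace=pods-2840 --grace-period=30

# During the grace period the object still exists, now marked for deletion:
kubectl get pod mypod --namespace=pods-2840 -o jsonpath='{.metadata.deletionTimestamp}'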
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 13:57:49.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-6dc4bc2f-261e-4408-8a78-c41c08667b34 in namespace container-probe-6053
Dec 18 13:58:01.272: INFO: Started pod busybox-6dc4bc2f-261e-4408-8a78-c41c08667b34 in namespace container-probe-6053
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 13:58:01.278: INFO: Initial restart count of pod busybox-6dc4bc2f-261e-4408-8a78-c41c08667b34 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:02:02.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6053" for this suite.
Dec 18 14:02:09.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:02:09.111: INFO: namespace container-probe-6053 deletion completed in 6.199981594s

• [SLOW TEST:259.925 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
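The probe only passes while /tmp/health exists, and the assertion is simply that restartCount stays at 0 over the roughly four-minute observation window. A minimal sketch of such a pod; the image and timings are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness              # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the file, then sleep so "cat /tmp/health" keeps succeeding.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF

kubectl get pod busybox-liveness -o jsonpath='{.status.containerStatuses[0].restartCount}'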
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:02:09.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 18 14:02:17.278: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 18 14:02:27.440: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:02:27.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9225" for this suite.
Dec 18 14:02:33.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:02:33.679: INFO: namespace pods-9225 deletion completed in 6.224181011s

• [SLOW TEST:24.568 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
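This variant issues the delete through the REST API (behind the kubectl proxy started above) rather than via kubectl delete, passing DeleteOptions with an explicit grace period. A sketch; the port, namespace path, and pod name are illustrative:

kubectl proxy --port=8001 &

curl -s -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":30}' \
  http://127.0.0.1:8001/api/v1/namespaces/pods-9225/pods/mypod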
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:02:33.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f023becd-e6d0-403c-bebf-4081f9b569c0
STEP: Creating a pod to test consume secrets
Dec 18 14:02:33.837: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e" in namespace "projected-5974" to be "success or failure"
Dec 18 14:02:33.851: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.444359ms
Dec 18 14:02:35.867: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029000628s
Dec 18 14:02:37.880: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042174075s
Dec 18 14:02:39.917: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079196209s
Dec 18 14:02:44.482: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.644664202s
Dec 18 14:02:46.505: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.667708499s
Dec 18 14:02:48.522: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.684103925s
STEP: Saw pod success
Dec 18 14:02:48.522: INFO: Pod "pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e" satisfied condition "success or failure"
Dec 18 14:02:48.536: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e container projected-secret-volume-test: 
STEP: delete the pod
Dec 18 14:02:48.702: INFO: Waiting for pod pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e to disappear
Dec 18 14:02:48.715: INFO: Pod pod-projected-secrets-1117d531-5174-4db2-a0d9-3dad8077712e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:02:48.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5974" for this suite.
Dec 18 14:02:56.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:02:56.885: INFO: namespace projected-5974 deletion completed in 8.164056257s

• [SLOW TEST:23.205 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
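"With mappings" means the projected secret remaps a key to a custom path inside the volume, which the test container then reads back. A minimal sketch; the key, path, and names are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map     # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                    # assumption: stands in for the e2e test image
    command: ["/bin/sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1     # the "mapping" under test
EOF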
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:02:56.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5130
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 14:02:57.032: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 14:03:33.421: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5130 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:03:33.421: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:03:34.868: INFO: Found all expected endpoints: [netserver-0]
Dec 18 14:03:34.881: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5130 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:03:34.881: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:03:36.236: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:03:36.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5130" for this suite.
Dec 18 14:04:02.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:04:02.611: INFO: namespace pod-network-test-5130 deletion completed in 26.34801775s

• [SLOW TEST:65.725 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
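The check itself is the nc exchange visible above: a hostexec pod running on the node sends a UDP datagram to each netserver pod IP and expects the pod's hostname back. Reproduced by hand with the pod IPs from the log:

kubectl exec --namespace=pod-network-test-5130 host-test-container-pod -c hostexec -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'"
# expected output: netserver-0 (the pod's hostname), i.e. node-to-pod UDP works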
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:04:02.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4499.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4499.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 14:04:16.828: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.859: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.869: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.875: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.880: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.885: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.892: INFO: Unable to read jessie_udp@PodARecord from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.896: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd: the server could not find the requested resource (get pods dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd)
Dec 18 14:04:16.896: INFO: Lookups using dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 18 14:04:21.984: INFO: DNS probes using dns-4499/dns-test-69ffa885-5293-4a1f-bbf5-e5cbffca54cd succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:04:22.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4499" for this suite.
Dec 18 14:04:28.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:04:28.392: INFO: namespace dns-4499 deletion completed in 6.15712598s

• [SLOW TEST:25.780 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
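The long probe commands above reduce to two lookups per resolver image: the kubernetes.default Service name and the probe pod's own A record, each over UDP and TCP (the doubled $$ in the log is escaping added by the test harness). A condensed, directly runnable form, from inside any pod that has dig; the generated pod name depends on whatever hostname -i returns:

# The Service name must resolve through the cluster DNS, over UDP and over TCP:
dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
dig +tcp   +noall +answer +search kubernetes.default.svc.cluster.local A

# The pod's own A record, e.g. 10-44-0-5.dns-4499.pod.cluster.local:
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4499.pod.cluster.local"}')
dig +notcp +noall +answer +search "$podARec" A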
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:04:28.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 18 14:04:28.521: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145916,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 14:04:28.522: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145916,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 18 14:04:38.548: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145930,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 18 14:04:38.549: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145930,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 18 14:04:48.578: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145945,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 14:04:48.579: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145945,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 18 14:04:58.593: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145959,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 14:04:58.594: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-a,UID:12ac1a8f-37f1-49a2-b9c1-1d3bbbff5a9e,ResourceVersion:17145959,Generation:0,CreationTimestamp:2019-12-18 14:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 18 14:05:08.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-b,UID:cbd07df3-6617-4bac-9b83-0550002dd698,ResourceVersion:17145973,Generation:0,CreationTimestamp:2019-12-18 14:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 14:05:08.616: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-b,UID:cbd07df3-6617-4bac-9b83-0550002dd698,ResourceVersion:17145973,Generation:0,CreationTimestamp:2019-12-18 14:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 18 14:05:18.641: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-b,UID:cbd07df3-6617-4bac-9b83-0550002dd698,ResourceVersion:17145988,Generation:0,CreationTimestamp:2019-12-18 14:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 18 14:05:18.642: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6226,SelfLink:/api/v1/namespaces/watch-6226/configmaps/e2e-watch-test-configmap-b,UID:cbd07df3-6617-4bac-9b83-0550002dd698,ResourceVersion:17145988,Generation:0,CreationTimestamp:2019-12-18 14:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:05:28.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6226" for this suite.
Dec 18 14:05:34.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:05:34.882: INFO: namespace watch-6226 deletion completed in 6.216945055s

• [SLOW TEST:66.489 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
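Each mutation above is delivered once per matching watch (label A, label B, and A-or-B), and the object's ResourceVersion advances with every event while its UID stays fixed. A sketch of the label-A watch with client-go, assuming a client-go release contemporary with this v1.15 cluster (Watch takes only ListOptions, no context):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Watch only configmaps carrying label A, as the first watcher does.
        w, err := clientset.CoreV1().ConfigMaps("watch-6226").Watch(metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            cm := ev.Object.(*v1.ConfigMap)
            // Prints ADDED / MODIFIED / DELETED, matching the "Got :" lines above.
            fmt.Println(ev.Type, cm.Name, cm.ResourceVersion)
        }
    }

The A-or-B watcher could be the same call with a set-based selector, e.g. LabelSelector: "watch-this-configmap in (multiple-watchers-A, multiple-watchers-B)".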
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:05:34.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8256
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 14:05:34.977: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 14:06:15.702: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8256 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:06:15.702: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:06:16.223: INFO: Waiting for endpoints: map[]
Dec 18 14:06:16.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8256 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:06:16.233: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:06:16.598: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:06:16.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8256" for this suite.
Dec 18 14:06:36.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:06:36.776: INFO: namespace pod-network-test-8256 deletion completed in 20.16608474s

• [SLOW TEST:61.894 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
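For context: the framework runs a test container per node plus a host-network pod, then asks one container's /dial endpoint to relay a single UDP "hostName" probe to each peer; "Waiting for endpoints: map[]" means the set of hostnames still missing is empty, i.e. every peer answered. A sketch of the relayed probe in Go, reusing the addresses from the log above (the response shape in the comment is an assumption):

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    // Ask the container at 10.44.0.2 to send one UDP probe to the peer pod
    // at 10.44.0.1:8081 and report which hostname answered, mirroring the
    // curl the framework execs above.
    func main() {
        url := "http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body)) // e.g. {"responses":["netserver-0"]} -- shape assumed
    }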
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:06:36.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6662204b-c4db-4ae0-bb36-63660a4802eb
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-6662204b-c4db-4ae0-bb36-63660a4802eb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:06:47.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-271" for this suite.
Dec 18 14:07:09.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:07:09.174: INFO: namespace configmap-271 deletion completed in 22.143407759s

• [SLOW TEST:32.397 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
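The ten-odd seconds between "Updating configmap ..." and the pod observing the change is the kubelet's periodic volume sync: configMap volume updates are eventually consistent, not instantaneous. A sketch of the update step with client-go, assuming the 1.15-era signatures (Get/Update without a context); the data key and value are assumptions taken from the upstream test:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        cms := clientset.CoreV1().ConfigMaps("configmap-271")
        cm, err := cms.Get("configmap-test-upd-6662204b-c4db-4ae0-bb36-63660a4802eb", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["data-1"] = "value-2" // key/value assumed from the upstream test
        if _, err := cms.Update(cm); err != nil {
            panic(err)
        }
        fmt.Println("updated; the mounted file changes on the kubelet's next sync")
    }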
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:07:09.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 18 14:07:27.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 14:07:27.529: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 14:07:29.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 14:07:29.983: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 14:07:31.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 14:07:31.537: INFO: Pod pod-with-poststart-http-hook still exists
Dec 18 14:07:33.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 18 14:07:33.536: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:07:33.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6657" for this suite.
Dec 18 14:07:57.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:07:57.711: INFO: namespace container-lifecycle-hook-6657 deletion completed in 24.169073857s

• [SLOW TEST:48.536 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
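The pod created in the second BeforeEach is a helper that records incoming HTTP requests; the pod under test declares a postStart HTTPGet hook aimed at it, and the container does not count as started until that request completes. A sketch of such a hook in Go, using the v1.Handler type as it was named in the 1.15 API (host, path, port, and image here are assumptions):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // Build a pod whose postStart hook GETs the handler pod, like the one
    // created above.
    func postStartPod(handlerIP string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "pod-with-poststart-http-hook",
                    Image: "k8s.gcr.io/pause:3.1", // image assumed
                    Lifecycle: &v1.Lifecycle{
                        PostStart: &v1.Handler{
                            HTTPGet: &v1.HTTPGetAction{
                                Host: handlerIP, // handler pod's IP, assumed
                                Path: "/echo?msg=poststart",
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
    }

    func main() { fmt.Printf("%+v\n", postStartPod("10.44.0.1")) }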
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:07:57.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 14:07:57.885: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 18 14:07:57.989: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 18 14:08:02.998: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 18 14:08:07.008: INFO: Creating deployment "test-rolling-update-deployment"
Dec 18 14:08:07.016: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 18 14:08:07.033: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 18 14:08:09.061: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 18 14:08:09.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:08:11.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:08:13.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712274887, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:08:15.076: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 18 14:08:15.089: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4236,SelfLink:/apis/apps/v1/namespaces/deployment-4236/deployments/test-rolling-update-deployment,UID:9e8fea18-97f3-47eb-999c-b39f1f15e001,ResourceVersion:17146405,Generation:1,CreationTimestamp:2019-12-18 14:08:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-18 14:08:07 +0000 UTC 2019-12-18 14:08:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-18 14:08:14 +0000 UTC 2019-12-18 14:08:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 18 14:08:15.095: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4236,SelfLink:/apis/apps/v1/namespaces/deployment-4236/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:75dfa092-0474-4f4d-aebb-2018841977fc,ResourceVersion:17146396,Generation:1,CreationTimestamp:2019-12-18 14:08:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9e8fea18-97f3-47eb-999c-b39f1f15e001 0xc001e87437 0xc001e87438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 18 14:08:15.095: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 18 14:08:15.095: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4236,SelfLink:/apis/apps/v1/namespaces/deployment-4236/replicasets/test-rolling-update-controller,UID:b8e6cdbf-fa9e-43bb-8427-c27dc1dcb80e,ResourceVersion:17146404,Generation:2,CreationTimestamp:2019-12-18 14:07:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9e8fea18-97f3-47eb-999c-b39f1f15e001 0xc001e87367 0xc001e87368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 14:08:15.100: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-ggv5v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-ggv5v,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4236,SelfLink:/api/v1/namespaces/deployment-4236/pods/test-rolling-update-deployment-79f6b9d75c-ggv5v,UID:3c9576a1-0e22-40f5-b5ca-812e28059794,ResourceVersion:17146395,Generation:0,CreationTimestamp:2019-12-18 14:08:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 75dfa092-0474-4f4d-aebb-2018841977fc 0xc001cb8067 0xc001cb8068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5n7f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5n7f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-f5n7f true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cb80e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001cb8100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:08:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:08:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:08:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:08:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-18 14:08:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-18 14:08:13 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://eab978a25f1029772313a0b9d43103d4f18fe5b9b3b4cf5ae95224b401455771}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:08:15.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4236" for this suite.
Dec 18 14:08:21.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:08:21.256: INFO: namespace deployment-4236 deletion completed in 6.150340796s

• [SLOW TEST:23.546 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
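Two details worth noticing in the dump above: the adopted replica set test-rolling-update-controller is scaled to 0 rather than deleted (RevisionHistoryLimit:*10 keeps it around for rollback), and its deployment.kubernetes.io/revision annotation is exactly one behind the new replica set's. The rollout pacing comes from the strategy block, sketched here in Go with the 25%/25% values the dump records:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // The strategy recorded in the dump: during a rollout at most 25% of
    // desired pods may be unavailable and at most 25% extra may exist, so
    // old pods are deleted only as new ones become ready.
    func main() {
        maxUnavailable := intstr.FromString("25%")
        maxSurge := intstr.FromString("25%")
        strategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &maxUnavailable,
                MaxSurge:       &maxSurge,
            },
        }
        fmt.Printf("%+v\n", strategy)
    }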
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:08:21.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 18 14:08:21.477: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix754873852/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:08:21.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6090" for this suite.
Dec 18 14:08:27.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:08:27.756: INFO: namespace kubectl-6090 deletion completed in 6.142114021s

• [SLOW TEST:6.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
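The doubled "kubectl kubectl" in the logged command appears to be the framework printing the binary path followed by the full argv; the effective invocation is kubectl proxy --unix-socket=/tmp/kubectl-proxy-unix754873852/test. Because the proxy listens on a unix socket rather than TCP, retrieving /api/ needs an HTTP client with a custom dialer. A sketch in Go, reusing the socket path from the log:

    package main

    import (
        "context"
        "fmt"
        "io/ioutil"
        "net"
        "net/http"
    )

    // Retrieve /api/ through a kubectl proxy bound to a unix socket: every
    // request dials the socket, and the URL's host part is ignored.
    func main() {
        sock := "/tmp/kubectl-proxy-unix754873852/test"
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    return net.Dial("unix", sock)
                },
            },
        }
        resp, err := client.Get("http://localhost/api/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body)) // the APIVersions document the STEP above retrieves
    }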
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:08:27.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7e32c3b5-25ff-42df-b785-937b082f411a
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7e32c3b5-25ff-42df-b785-937b082f411a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:09:41.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6272" for this suite.
Dec 18 14:10:03.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:10:03.838: INFO: namespace projected-6272 deletion completed in 22.201510268s

• [SLOW TEST:96.081 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
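This is the same kubelet-sync behaviour as the plain ConfigMap volume test earlier, except the data reaches the pod through a projected volume, which can merge configMaps, secrets, and downward API items under one mount point. A sketch of the volume source in Go, wrapping the configMap named above (the volume name is an assumption):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // A projected volume wrapping the configMap above; further Sources
    // entries could add secrets or downward API items to the same mount.
    func main() {
        vol := v1.Volume{
            Name: "projected-configmap-volume", // name assumed
            VolumeSource: v1.VolumeSource{
                Projected: &v1.ProjectedVolumeSource{
                    Sources: []v1.VolumeProjection{{
                        ConfigMap: &v1.ConfigMapProjection{
                            LocalObjectReference: v1.LocalObjectReference{
                                Name: "projected-configmap-test-upd-7e32c3b5-25ff-42df-b785-937b082f411a",
                            },
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }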
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:10:03.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 18 14:10:03.966: INFO: Waiting up to 5m0s for pod "pod-c697152f-5d97-443e-8561-e8a70595a163" in namespace "emptydir-5033" to be "success or failure"
Dec 18 14:10:03.975: INFO: Pod "pod-c697152f-5d97-443e-8561-e8a70595a163": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903377ms
Dec 18 14:10:05.984: INFO: Pod "pod-c697152f-5d97-443e-8561-e8a70595a163": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017986897s
Dec 18 14:10:08.704: INFO: Pod "pod-c697152f-5d97-443e-8561-e8a70595a163": Phase="Pending", Reason="", readiness=false. Elapsed: 4.737665619s
Dec 18 14:10:10.714: INFO: Pod "pod-c697152f-5d97-443e-8561-e8a70595a163": Phase="Pending", Reason="", readiness=false. Elapsed: 6.747491044s
Dec 18 14:10:12.726: INFO: Pod "pod-c697152f-5d97-443e-8561-e8a70595a163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.759984285s
STEP: Saw pod success
Dec 18 14:10:12.726: INFO: Pod "pod-c697152f-5d97-443e-8561-e8a70595a163" satisfied condition "success or failure"
Dec 18 14:10:12.757: INFO: Trying to get logs from node iruya-node pod pod-c697152f-5d97-443e-8561-e8a70595a163 container test-container: 
STEP: delete the pod
Dec 18 14:10:12.847: INFO: Waiting for pod pod-c697152f-5d97-443e-8561-e8a70595a163 to disappear
Dec 18 14:10:12.947: INFO: Pod pod-c697152f-5d97-443e-8561-e8a70595a163 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:10:12.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5033" for this suite.
Dec 18 14:10:18.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:10:19.100: INFO: namespace emptydir-5033 deletion completed in 6.132802955s

• [SLOW TEST:15.262 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
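The test pod mounts an emptyDir with the default medium (node-disk backing; memory would mean tmpfs), prints the mount's type and permission bits to its log, and the framework compares that output against the expected mode, which is why the pod reaching Succeeded satisfies "success or failure". A sketch of the volume under test in Go:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // The volume under test: default medium means node-disk backing
    // (v1.StorageMediumMemory would request tmpfs instead).
    func main() {
        vol := v1.Volume{
            Name: "test-volume",
            VolumeSource: v1.VolumeSource{
                EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault},
            },
        }
        fmt.Printf("%+v\n", vol)
    }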
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:10:19.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:10:19.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68" in namespace "projected-9906" to be "success or failure"
Dec 18 14:10:19.358: INFO: Pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68": Phase="Pending", Reason="", readiness=false. Elapsed: 32.643631ms
Dec 18 14:10:21.376: INFO: Pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050553315s
Dec 18 14:10:23.389: INFO: Pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063782975s
Dec 18 14:10:25.399: INFO: Pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073446193s
Dec 18 14:10:27.429: INFO: Pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104155747s
STEP: Saw pod success
Dec 18 14:10:27.429: INFO: Pod "downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68" satisfied condition "success or failure"
Dec 18 14:10:27.435: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68 container client-container: 
STEP: delete the pod
Dec 18 14:10:27.495: INFO: Waiting for pod downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68 to disappear
Dec 18 14:10:27.501: INFO: Pod downwardapi-volume-34394f96-9b66-4fcb-9929-4b099d959d68 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:10:27.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9906" for this suite.
Dec 18 14:10:33.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:10:33.845: INFO: namespace projected-9906 deletion completed in 6.334533558s

• [SLOW TEST:14.745 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
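Here the projected downward API volume exposes the container's own CPU request as a file; the client-container above simply reads it back and the framework checks the logged value. A sketch of the volume item in Go (the file path is an assumption; the container name is taken from the log):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // A projected downward API item exposing the container's own CPU
    // request as a file the container can read back.
    func main() {
        src := v1.VolumeProjection{
            DownwardAPI: &v1.DownwardAPIProjection{
                Items: []v1.DownwardAPIVolumeFile{{
                    Path: "cpu_request", // file name assumed
                    ResourceFieldRef: &v1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "requests.cpu",
                    },
                }},
            },
        }
        fmt.Printf("%+v\n", src)
    }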
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:10:33.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6852
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6852
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6852
Dec 18 14:10:34.131: INFO: Found 0 stateful pods, waiting for 1
Dec 18 14:10:44.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 18 14:10:44.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 14:10:46.918: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 14:10:46.918: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 14:10:46.918: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 14:10:46.930: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 18 14:10:56.946: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
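The mv above is how the test manufactures an unhealthy-but-running pod: relocating index.html makes nginx return 404 to the pod's HTTP readiness probe, so Ready flips to false while the container keeps running, as the two waits just confirmed. A sketch of such a probe in Go, with the 1.15-era field names (path, port, and thresholds are assumptions):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // An HTTP readiness probe like the one the mv defeats: once
    // /index.html starts returning 404, the kubelet marks the container
    // not ready, but the pod stays Running.
    func main() {
        probe := &v1.Probe{
            Handler: v1.Handler{ // the 1.15-era field and type name
                HTTPGet: &v1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
            },
            PeriodSeconds:    1,
            FailureThreshold: 1,
        }
        fmt.Printf("%+v\n", probe)
    }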
Dec 18 14:10:56.946: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 14:10:56.987: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 18 14:10:56.987: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:10:56.987: INFO: ss-1              Pending         []
Dec 18 14:10:56.987: INFO: 
Dec 18 14:10:56.987: INFO: StatefulSet ss has not reached scale 3, at 2
Dec 18 14:10:58.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982459531s
Dec 18 14:11:00.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.05162418s
Dec 18 14:11:01.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.857313947s
Dec 18 14:11:02.161: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.836115672s
Dec 18 14:11:03.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.807901538s
Dec 18 14:11:05.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.007816784s
Dec 18 14:11:06.650: INFO: Verifying statefulset ss doesn't scale past 3 for another 330.307014ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6852
Dec 18 14:11:07.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:11:08.575: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 18 14:11:08.575: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 14:11:08.575: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 14:11:08.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:11:08.888: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 18 14:11:08.889: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 14:11:08.889: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 14:11:08.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:11:09.277: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 18 14:11:09.277: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 18 14:11:09.277: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 18 14:11:09.289: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:11:09.290: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:11:09.290: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 18 14:11:09.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 14:11:09.770: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 14:11:09.771: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 14:11:09.771: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 14:11:09.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 14:11:10.281: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 14:11:10.281: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 14:11:10.281: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 14:11:10.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 18 14:11:11.039: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 18 14:11:11.039: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 18 14:11:11.039: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 18 14:11:11.039: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 14:11:11.052: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 18 14:11:21.064: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 14:11:21.064: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 14:11:21.064: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 18 14:11:21.132: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 18 14:11:21.133: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:21.133: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:56 +0000 UTC  }]
Dec 18 14:11:21.133: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:21.133: INFO: 
Dec 18 14:11:21.133: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 14:11:23.260: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 18 14:11:23.261: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:23.261: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:56 +0000 UTC  }]
Dec 18 14:11:23.261: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:23.261: INFO: 
Dec 18 14:11:23.261: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 14:11:24.271: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 18 14:11:24.271: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:24.271: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:56 +0000 UTC  }]
Dec 18 14:11:24.271: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:24.271: INFO: 
Dec 18 14:11:24.271: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 14:11:25.555: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 18 14:11:25.555: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:25.555: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:56 +0000 UTC  }]
Dec 18 14:11:25.555: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:25.555: INFO: 
Dec 18 14:11:25.555: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 14:11:26.576: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 18 14:11:26.576: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:26.576: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:56 +0000 UTC  }]
Dec 18 14:11:26.576: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:26.576: INFO: 
Dec 18 14:11:26.576: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 14:11:27.617: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 18 14:11:27.617: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:27.617: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:56 +0000 UTC  }]
Dec 18 14:11:27.617: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:27.617: INFO: 
Dec 18 14:11:27.617: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 18 14:11:28.667: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 18 14:11:28.667: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:28.667: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:28.667: INFO: 
Dec 18 14:11:28.667: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 18 14:11:29.684: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 18 14:11:29.684: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:29.684: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:29.684: INFO: 
Dec 18 14:11:29.684: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 18 14:11:30.706: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 18 14:11:30.707: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:34 +0000 UTC  }]
Dec 18 14:11:30.707: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:11:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:10:57 +0000 UTC  }]
Dec 18 14:11:30.708: INFO: 
Dec 18 14:11:30.708: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-6852
Dec 18 14:11:31.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:11:32.009: INFO: rc: 1
Dec 18 14:11:32.009: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00286b920 exit status 1   true [0xc0026bc0a0 0xc0026bc0b8 0xc0026bc0d0] [0xc0026bc0a0 0xc0026bc0b8 0xc0026bc0d0] [0xc0026bc0b0 0xc0026bc0c8] [0xba6c50 0xba6c50] 0xc0019b8120 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Dec 18 14:11:42.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:11:42.154: INFO: rc: 1
Dec 18 14:11:42.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00286b9e0 exit status 1   true [0xc0026bc0d8 0xc0026bc0f0 0xc0026bc108] [0xc0026bc0d8 0xc0026bc0f0 0xc0026bc108] [0xc0026bc0e8 0xc0026bc100] [0xba6c50 0xba6c50] 0xc001d17980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 18 14:16:27.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:16:27.319: INFO: rc: 1
Dec 18 14:16:27.319: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002e28360 exit status 1   true [0xc0026bc0a8 0xc0026bc0c0 0xc0026bc0d8] [0xc0026bc0a8 0xc0026bc0c0 0xc0026bc0d8] [0xc0026bc0b8 0xc0026bc0d0] [0xba6c50 0xba6c50] 0xc001adc5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 18 14:16:37.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6852 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 18 14:16:37.517: INFO: rc: 1
Dec 18 14:16:37.517: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Dec 18 14:16:37.517: INFO: Scaling statefulset ss to 0
Dec 18 14:16:37.547: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 18 14:16:37.550: INFO: Deleting all statefulset in ns statefulset-6852
Dec 18 14:16:37.553: INFO: Scaling statefulset ss to 0
Dec 18 14:16:37.564: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 14:16:37.567: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:16:37.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6852" for this suite.
Dec 18 14:16:43.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:16:43.866: INFO: namespace statefulset-6852 deletion completed in 6.24480394s

• [SLOW TEST:370.020 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
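A note on the retry pattern in the test above: the framework runs a host command through kubectl exec, waits 10s on failure, and tries again until an overall timeout expires. A minimal, self-contained sketch of that loop in Go, using hypothetical helper names rather than the framework's real API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd is a hypothetical stand-in for the framework's RunHostCmd
// helper: it shells out to kubectl exec and returns combined output.
func runHostCmd(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

// retryHostCmd mirrors the "Waiting 10s to retry failed RunHostCmd"
// cadence seen in the log: re-run the command every 10s until it
// succeeds or the overall timeout expires.
func retryHostCmd(ns, pod, cmd string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := runHostCmd(ns, pod, cmd)
		if err == nil {
			return out, nil
		}
		if time.Now().After(deadline) {
			return out, fmt.Errorf("giving up on %q after %v: %v", cmd, timeout, err)
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	out, err := retryHostCmd("statefulset-6852", "ss-0",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true", 5*time.Minute)
	fmt.Println(out, err)
}

With a roughly 5m budget and a 10s back-off this yields about 30 attempts, which matches the span of retries logged above (14:11:31 through 14:16:37).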
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:16:43.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 14:16:43.982: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 18 14:16:44.190: INFO: Number of nodes with available pods: 0
Dec 18 14:16:44.191: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:45.564: INFO: Number of nodes with available pods: 0
Dec 18 14:16:45.564: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:46.793: INFO: Number of nodes with available pods: 0
Dec 18 14:16:46.793: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:47.203: INFO: Number of nodes with available pods: 0
Dec 18 14:16:47.203: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:48.304: INFO: Number of nodes with available pods: 0
Dec 18 14:16:48.304: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:51.035: INFO: Number of nodes with available pods: 0
Dec 18 14:16:51.035: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:51.469: INFO: Number of nodes with available pods: 0
Dec 18 14:16:51.469: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:52.479: INFO: Number of nodes with available pods: 0
Dec 18 14:16:52.479: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:53.204: INFO: Number of nodes with available pods: 0
Dec 18 14:16:53.204: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:16:54.206: INFO: Number of nodes with available pods: 2
Dec 18 14:16:54.206: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 18 14:16:54.388: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:54.388: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:55.413: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:55.413: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:56.417: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:56.417: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:57.419: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:57.419: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:58.413: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:58.413: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:59.435: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:16:59.435: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:00.413: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:00.413: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:00.413: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:01.409: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:01.409: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:01.409: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:02.411: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:02.411: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:02.411: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:03.413: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:03.413: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:03.413: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:04.416: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:04.416: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:04.416: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:05.416: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:05.416: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:05.416: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:06.420: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:06.420: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:06.420: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:07.410: INFO: Wrong image for pod: daemon-set-h9q9n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:07.410: INFO: Pod daemon-set-h9q9n is not available
Dec 18 14:17:07.410: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:08.411: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:08.411: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:09.727: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:09.727: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:10.450: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:10.450: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:11.413: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:11.413: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:12.417: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:12.417: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:14.048: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:14.048: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:14.488: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:14.488: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:15.413: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:15.413: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:16.416: INFO: Pod daemon-set-bz9wc is not available
Dec 18 14:17:16.416: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:17.420: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:18.410: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:19.428: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:20.416: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:21.416: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:21.416: INFO: Pod daemon-set-rq5tk is not available
Dec 18 14:17:22.436: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:22.436: INFO: Pod daemon-set-rq5tk is not available
Dec 18 14:17:23.420: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:23.420: INFO: Pod daemon-set-rq5tk is not available
Dec 18 14:17:24.417: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:24.417: INFO: Pod daemon-set-rq5tk is not available
Dec 18 14:17:25.419: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:25.419: INFO: Pod daemon-set-rq5tk is not available
Dec 18 14:17:26.414: INFO: Wrong image for pod: daemon-set-rq5tk. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 18 14:17:26.415: INFO: Pod daemon-set-rq5tk is not available
Dec 18 14:17:27.413: INFO: Pod daemon-set-gr6zq is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 18 14:17:27.429: INFO: Number of nodes with available pods: 1
Dec 18 14:17:27.429: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:28.452: INFO: Number of nodes with available pods: 1
Dec 18 14:17:28.452: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:29.444: INFO: Number of nodes with available pods: 1
Dec 18 14:17:29.445: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:30.446: INFO: Number of nodes with available pods: 1
Dec 18 14:17:30.446: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:31.452: INFO: Number of nodes with available pods: 1
Dec 18 14:17:31.452: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:32.484: INFO: Number of nodes with available pods: 1
Dec 18 14:17:32.484: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:33.442: INFO: Number of nodes with available pods: 1
Dec 18 14:17:33.442: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:34.458: INFO: Number of nodes with available pods: 1
Dec 18 14:17:34.459: INFO: Node iruya-node is running more than one daemon pod
Dec 18 14:17:35.451: INFO: Number of nodes with available pods: 2
Dec 18 14:17:35.451: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-851, will wait for the garbage collector to delete the pods
Dec 18 14:17:35.549: INFO: Deleting DaemonSet.extensions daemon-set took: 14.313424ms
Dec 18 14:17:35.850: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.021358ms
Dec 18 14:17:47.870: INFO: Number of nodes with available pods: 0
Dec 18 14:17:47.870: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 14:17:47.880: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-851/daemonsets","resourceVersion":"17147524"},"items":null}

Dec 18 14:17:47.888: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-851/pods","resourceVersion":"17147524"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:17:47.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-851" for this suite.
Dec 18 14:17:53.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:17:54.116: INFO: namespace daemonsets-851 deletion completed in 6.158445738s

• [SLOW TEST:70.247 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
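The RollingUpdate phase above polls until no daemon pod reports the old image. Expressed outside the framework, the same check can be approximated by listing each pod's image via kubectl jsonpath; the label selector and image names below are illustrative assumptions, not the fixture's actual values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// allPodsOnImage lists the first container image of every pod matching
// the label selector and reports whether all of them equal want.
func allPodsOnImage(ns, selector, want string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pods", "--namespace="+ns,
		"-l", selector, "-o",
		`jsonpath={range .items[*]}{.spec.containers[0].image}{"\n"}{end}`).Output()
	if err != nil {
		return false, err
	}
	for _, img := range strings.Fields(string(out)) {
		if img != want {
			return false, nil // at least one pod still runs the old image
		}
	}
	return true, nil
}

func main() {
	for {
		// Selector is an assumed label, not the e2e test's own.
		done, err := allPodsOnImage("daemonsets-851", "name=daemon-set",
			"gcr.io/kubernetes-e2e-test-images/redis:1.0")
		if err != nil {
			fmt.Println("poll error:", err)
		}
		if done {
			fmt.Println("all daemon pods updated")
			return
		}
		time.Sleep(time.Second)
	}
}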
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:17:54.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Dec 18 14:17:54.332: INFO: Waiting up to 5m0s for pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865" in namespace "var-expansion-7676" to be "success or failure"
Dec 18 14:17:54.347: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865": Phase="Pending", Reason="", readiness=false. Elapsed: 14.797614ms
Dec 18 14:17:56.359: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026644366s
Dec 18 14:17:58.383: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050419829s
Dec 18 14:18:00.400: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067851819s
Dec 18 14:18:02.411: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0782621s
Dec 18 14:18:04.422: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089511103s
STEP: Saw pod success
Dec 18 14:18:04.422: INFO: Pod "var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865" satisfied condition "success or failure"
Dec 18 14:18:04.428: INFO: Trying to get logs from node iruya-node pod var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865 container dapi-container: 
STEP: delete the pod
Dec 18 14:18:04.518: INFO: Waiting for pod var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865 to disappear
Dec 18 14:18:04.530: INFO: Pod var-expansion-f1e9e223-2027-4d5c-a653-c007b0d2b865 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:18:04.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7676" for this suite.
Dec 18 14:18:12.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:18:12.732: INFO: namespace var-expansion-7676 deletion completed in 8.156391805s

• [SLOW TEST:18.615 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
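For reference, the substitution under test above is Kubernetes' $(VAR_NAME) expansion in a container's args. A minimal pod definition that exercises it, written against the core/v1 Go types (names and values are illustrative, not the test's actual fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(MESSAGE) in Args is expanded by the kubelet from the container's
	// own env vars before the process starts; the conformance test
	// asserts the expanded value appears in the container's output.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(MESSAGE)"},
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from args"}},
			}},
		},
	}
	fmt.Printf("args: %v\n", pod.Spec.Containers[0].Args)
}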
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:18:12.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 14:18:12.868: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 18 14:18:17.881: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 18 14:18:19.903: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 18 14:18:21.912: INFO: Creating deployment "test-rollover-deployment"
Dec 18 14:18:21.923: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 18 14:18:23.942: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 18 14:18:23.954: INFO: Ensure that both replica sets have 1 created replica
Dec 18 14:18:23.960: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 18 14:18:23.969: INFO: Updating deployment test-rollover-deployment
Dec 18 14:18:23.970: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 18 14:18:26.005: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 18 14:18:26.014: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 18 14:18:26.020: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:26.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275504, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:28.039: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:28.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275504, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:30.039: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:30.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275504, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:32.032: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:32.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275504, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:34.037: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:34.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:36.038: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:36.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:38.032: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:38.032: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:40.033: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:40.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:43.339: INFO: all replica sets need to contain the pod-template-hash label
Dec 18 14:18:43.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275502, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275512, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712275501, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 18 14:18:44.073: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 18 14:18:44.093: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9636,SelfLink:/apis/apps/v1/namespaces/deployment-9636/deployments/test-rollover-deployment,UID:7b94bad0-357d-44f9-8220-2adc31de2c86,ResourceVersion:17147727,Generation:2,CreationTimestamp:2019-12-18 14:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-18 14:18:22 +0000 UTC 2019-12-18 14:18:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-18 14:18:43 +0000 UTC 2019-12-18 14:18:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 18 14:18:44.097: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9636,SelfLink:/apis/apps/v1/namespaces/deployment-9636/replicasets/test-rollover-deployment-854595fc44,UID:2971bf57-b6b2-4743-926b-5a7627559d41,ResourceVersion:17147712,Generation:2,CreationTimestamp:2019-12-18 14:18:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7b94bad0-357d-44f9-8220-2adc31de2c86 0xc001e48477 0xc001e48478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 18 14:18:44.097: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 18 14:18:44.097: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9636,SelfLink:/apis/apps/v1/namespaces/deployment-9636/replicasets/test-rollover-controller,UID:8a485530-573b-457e-a056-a29fc7f2175d,ResourceVersion:17147726,Generation:2,CreationTimestamp:2019-12-18 14:18:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7b94bad0-357d-44f9-8220-2adc31de2c86 0xc001e4838f 0xc001e483a0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 14:18:44.097: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9636,SelfLink:/apis/apps/v1/namespaces/deployment-9636/replicasets/test-rollover-deployment-9b8b997cf,UID:efeee80c-f3f1-45e0-a0bd-72b9a7659417,ResourceVersion:17147675,Generation:2,CreationTimestamp:2019-12-18 14:18:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7b94bad0-357d-44f9-8220-2adc31de2c86 0xc001e48540 0xc001e48541}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 18 14:18:44.101: INFO: Pod "test-rollover-deployment-854595fc44-82qdr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-82qdr,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9636,SelfLink:/api/v1/namespaces/deployment-9636/pods/test-rollover-deployment-854595fc44-82qdr,UID:89af782c-9552-4256-8a10-eb8a1449e27a,ResourceVersion:17147698,Generation:0,CreationTimestamp:2019-12-18 14:18:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 2971bf57-b6b2-4743-926b-5a7627559d41 0xc001e49137 0xc001e49138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dr57f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dr57f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dr57f true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e491b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e491d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:18:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:18:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:18:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-18 14:18:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-18 14:18:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-18 14:18:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6ddb1893a24256507a3310c47b2868d4d167e0018d06a47b382d485c0d8afd77}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:18:44.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9636" for this suite.
Dec 18 14:18:52.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:18:52.264: INFO: namespace deployment-9636 deletion completed in 8.15835321s

• [SLOW TEST:39.531 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
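Note: a simplified sketch of the rollover the spec above drives, grounded in the ReplicaSet dumps in the log: the Deployment starts from a template with a deliberately unpullable image (so revision 1 can never complete), is updated mid-rollout to a working image, and with minReadySeconds=10 and maxUnavailable=0 must converge on the final template with both old replica sets scaled to zero. Treat it as illustrative, not the suite's exact steps.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10               # slows the rollout so the mid-flight update matters
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/google_samples/gb-redisslave:nonexistent   # deliberately unpullable
EOF
# Roll over to a working image before revision 1 ever becomes available
kubectl set image deployment/test-rollover-deployment redis=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/test-rollover-deployment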
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:18:52.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:18:52.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb" in namespace "downward-api-1247" to be "success or failure"
Dec 18 14:18:52.462: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.880954ms
Dec 18 14:18:54.481: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030331941s
Dec 18 14:18:56.499: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047550036s
Dec 18 14:18:58.513: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061542484s
Dec 18 14:19:00.527: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076048561s
Dec 18 14:19:02.545: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094275023s
STEP: Saw pod success
Dec 18 14:19:02.546: INFO: Pod "downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb" satisfied condition "success or failure"
Dec 18 14:19:02.553: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb container client-container: 
STEP: delete the pod
Dec 18 14:19:02.636: INFO: Waiting for pod downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb to disappear
Dec 18 14:19:02.640: INFO: Pod downwardapi-volume-6bb1468b-3ce3-481f-85db-e27b19eec3eb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:19:02.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1247" for this suite.
Dec 18 14:19:08.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:19:08.739: INFO: namespace downward-api-1247 deletion completed in 6.093133319s

• [SLOW TEST:16.475 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
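Note: the behaviour the spec above checks is that a downwardAPI volume file requesting limits.memory for a container that sets no memory limit falls back to the node's allocatable memory. A minimal sketch, with assumed pod and image names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory on purpose: the file below then reports
    # the node's allocatable memory instead of a container limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF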
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:19:08.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-a470fe7a-2fbb-440c-a9b8-f04b7a971409
STEP: Creating secret with name secret-projected-all-test-volume-1cf1703f-7be1-49cd-9345-de29dc56fd3c
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 18 14:19:08.896: INFO: Waiting up to 5m0s for pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7" in namespace "projected-7465" to be "success or failure"
Dec 18 14:19:08.984: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7": Phase="Pending", Reason="", readiness=false. Elapsed: 87.203899ms
Dec 18 14:19:11.024: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127512436s
Dec 18 14:19:13.031: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134206519s
Dec 18 14:19:15.040: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143347567s
Dec 18 14:19:17.048: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151543446s
Dec 18 14:19:19.056: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159183005s
STEP: Saw pod success
Dec 18 14:19:19.056: INFO: Pod "projected-volume-6447380d-968b-415e-aa9a-7572643821b7" satisfied condition "success or failure"
Dec 18 14:19:19.060: INFO: Trying to get logs from node iruya-node pod projected-volume-6447380d-968b-415e-aa9a-7572643821b7 container projected-all-volume-test: 
STEP: delete the pod
Dec 18 14:19:19.134: INFO: Waiting for pod projected-volume-6447380d-968b-415e-aa9a-7572643821b7 to disappear
Dec 18 14:19:19.142: INFO: Pod projected-volume-6447380d-968b-415e-aa9a-7572643821b7 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:19:19.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7465" for this suite.
Dec 18 14:19:25.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:19:25.328: INFO: namespace projected-7465 deletion completed in 6.179420297s

• [SLOW TEST:16.589 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
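Note: the projection spec above mounts configMap, secret, and downwardAPI sources through a single projected volume. A hedged sketch with made-up resource names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata: {name: projected-cm-demo}
data: {configmap-data: cm-value}
---
apiVersion: v1
kind: Secret
metadata: {name: projected-secret-demo}
stringData: {secret-data: secret-value}
---
apiVersion: v1
kind: Pod
metadata: {name: projected-all-demo}
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - {name: all-in-one, mountPath: /all}
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - {path: podname, fieldRef: {fieldPath: metadata.name}}
      - configMap:
          name: projected-cm-demo
          items: [{key: configmap-data, path: cm}]
      - secret:
          name: projected-secret-demo
          items: [{key: secret-data, path: secret}]
EOF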
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:19:25.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 18 14:19:45.684: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:45.684: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:46.183: INFO: Exec stderr: ""
Dec 18 14:19:46.184: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:46.184: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:46.500: INFO: Exec stderr: ""
Dec 18 14:19:46.500: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:46.500: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:46.896: INFO: Exec stderr: ""
Dec 18 14:19:46.896: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:46.897: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:47.275: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 18 14:19:47.276: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:47.276: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:47.580: INFO: Exec stderr: ""
Dec 18 14:19:47.580: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:47.581: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:47.891: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 18 14:19:47.892: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:47.892: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:48.230: INFO: Exec stderr: ""
Dec 18 14:19:48.231: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:48.231: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:48.709: INFO: Exec stderr: ""
Dec 18 14:19:48.710: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:48.710: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:49.221: INFO: Exec stderr: ""
Dec 18 14:19:49.221: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1125 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:19:49.221: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:19:49.576: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:19:49.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1125" for this suite.
Dec 18 14:20:41.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:20:41.773: INFO: namespace e2e-kubelet-etc-hosts-1125 deletion completed in 52.188755602s

• [SLOW TEST:76.443 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
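Note: the three cases verified above come down to one rule: the kubelet manages /etc/hosts only when hostNetwork=false, and only for containers that do not mount their own file over it. A sketch of the opt-out case (pod and image names assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata: {name: etc-hosts-demo}
spec:
  hostNetwork: false                # kubelet-managed /etc/hosts applies here
  containers:
  - name: busybox-managed
    image: busybox                  # assumed image
    command: ["sleep", "3600"]
  - name: busybox-unmanaged
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-file              # mounting over /etc/hosts opts this container out
      mountPath: /etc/hosts
  volumes:
  - name: hosts-file
    hostPath: {path: /etc/hosts}
EOF
# Compare the two containers' views of /etc/hosts:
kubectl exec etc-hosts-demo -c busybox-managed -- cat /etc/hosts
kubectl exec etc-hosts-demo -c busybox-unmanaged -- cat /etc/hosts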
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:20:41.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 18 14:20:42.715: INFO: Pod name wrapped-volume-race-14f4520f-03aa-42da-a51a-c4a40ce1ada9: Found 0 pods out of 5
Dec 18 14:20:47.762: INFO: Pod name wrapped-volume-race-14f4520f-03aa-42da-a51a-c4a40ce1ada9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-14f4520f-03aa-42da-a51a-c4a40ce1ada9 in namespace emptydir-wrapper-4152, will wait for the garbage collector to delete the pods
Dec 18 14:21:17.899: INFO: Deleting ReplicationController wrapped-volume-race-14f4520f-03aa-42da-a51a-c4a40ce1ada9 took: 19.853245ms
Dec 18 14:21:18.400: INFO: Terminating ReplicationController wrapped-volume-race-14f4520f-03aa-42da-a51a-c4a40ce1ada9 pods took: 500.889575ms
STEP: Creating RC which spawns configmap-volume pods
Dec 18 14:22:07.764: INFO: Pod name wrapped-volume-race-faea54c9-7afe-4248-a36b-683e948200c6: Found 0 pods out of 5
Dec 18 14:22:12.782: INFO: Pod name wrapped-volume-race-faea54c9-7afe-4248-a36b-683e948200c6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-faea54c9-7afe-4248-a36b-683e948200c6 in namespace emptydir-wrapper-4152, will wait for the garbage collector to delete the pods
Dec 18 14:22:40.909: INFO: Deleting ReplicationController wrapped-volume-race-faea54c9-7afe-4248-a36b-683e948200c6 took: 25.525214ms
Dec 18 14:22:41.310: INFO: Terminating ReplicationController wrapped-volume-race-faea54c9-7afe-4248-a36b-683e948200c6 pods took: 400.438163ms
STEP: Creating RC which spawns configmap-volume pods
Dec 18 14:23:27.367: INFO: Pod name wrapped-volume-race-e3b3a6b4-8e67-44f0-8609-c7c759492b20: Found 0 pods out of 5
Dec 18 14:23:32.382: INFO: Pod name wrapped-volume-race-e3b3a6b4-8e67-44f0-8609-c7c759492b20: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e3b3a6b4-8e67-44f0-8609-c7c759492b20 in namespace emptydir-wrapper-4152, will wait for the garbage collector to delete the pods
Dec 18 14:24:06.513: INFO: Deleting ReplicationController wrapped-volume-race-e3b3a6b4-8e67-44f0-8609-c7c759492b20 took: 27.496636ms
Dec 18 14:24:06.915: INFO: Terminating ReplicationController wrapped-volume-race-e3b3a6b4-8e67-44f0-8609-c7c759492b20 pods took: 401.319863ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:24:57.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4152" for this suite.
Dec 18 14:25:07.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:25:08.061: INFO: namespace emptydir-wrapper-4152 deletion completed in 10.189524969s

• [SLOW TEST:266.287 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
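Note: the race spec above repeatedly creates and garbage-collects RCs whose pods mount many configMap volumes at once (50 configmaps, 5 replicas, three rounds in this run). A scaled-down sketch of one round, with assumed names and only two volumes:

# The suite creates 50 configmaps; two are enough to show the shape
for i in 0 1; do
  kubectl create configmap racey-cm-$i --from-literal=data-1=value-1
done
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata: {name: wrapped-volume-race-demo}
spec:
  replicas: 5
  selector: {name: wrapped-volume-race-demo}
  template:
    metadata:
      labels: {name: wrapped-volume-race-demo}
    spec:
      containers:
      - name: test-container
        image: busybox              # assumed image
        command: ["sleep", "10000"]
        volumeMounts:
        - {name: racey-cm-0, mountPath: /etc/config-0}
        - {name: racey-cm-1, mountPath: /etc/config-1}
      volumes:
      - name: racey-cm-0
        configMap: {name: racey-cm-0}
      - name: racey-cm-1
        configMap: {name: racey-cm-1}
EOF
# Tear down and let the garbage collector delete the pods, as the test does
kubectl delete rc wrapped-volume-race-demo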
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:25:08.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:25:42.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6795" for this suite.
Dec 18 14:25:48.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:25:48.474: INFO: namespace namespaces-6795 deletion completed in 6.127121109s
STEP: Destroying namespace "nsdeletetest-5683" for this suite.
Dec 18 14:25:48.477: INFO: Namespace nsdeletetest-5683 was already deleted
STEP: Destroying namespace "nsdeletetest-8751" for this suite.
Dec 18 14:25:54.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:25:54.697: INFO: namespace nsdeletetest-8751 deletion completed in 6.219847723s

• [SLOW TEST:46.635 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
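Note: the invariant exercised above is that deleting a namespace cascades to its pods, and recreating a namespace of the same name starts empty. Roughly the same round trip by hand (the namespace, pod name, and pause image are assumptions):

kubectl create namespace nsdelete-demo
kubectl run pause-pod --image=k8s.gcr.io/pause:3.1 --restart=Never -n nsdelete-demo
kubectl wait --for=condition=Ready pod/pause-pod -n nsdelete-demo
kubectl delete namespace nsdelete-demo --wait=true    # blocks until the pods are gone too
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo                     # expect: No resources found.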
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:25:54.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 18 14:26:13.173: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 14:26:13.183: INFO: Pod pod-with-prestop-http-hook still exists
Dec 18 14:26:15.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 14:26:15.200: INFO: Pod pod-with-prestop-http-hook still exists
Dec 18 14:26:17.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 14:26:17.197: INFO: Pod pod-with-prestop-http-hook still exists
Dec 18 14:26:19.184: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 18 14:26:19.194: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:26:19.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5971" for this suite.
Dec 18 14:26:53.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:26:53.455: INFO: namespace container-lifecycle-hook-5971 deletion completed in 34.21562431s

• [SLOW TEST:58.758 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
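Note: the preStop flow above works as follows: the pod declares lifecycle.preStop.httpGet, and on deletion the kubelet performs that GET (against a separately created handler pod in this suite) before killing the container. A sketch; the handler endpoint and IP are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata: {name: pod-with-prestop-http-hook}
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1     # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # hypothetical handler endpoint
          host: 10.44.0.2           # placeholder: the handler pod's IP
          port: 8080
EOF
# Deleting the pod fires the hook before termination
kubectl delete pod pod-with-prestop-http-hook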
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:26:53.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Dec 18 14:26:53.693: INFO: Waiting up to 5m0s for pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f" in namespace "var-expansion-2711" to be "success or failure"
Dec 18 14:26:53.731: INFO: Pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.718625ms
Dec 18 14:26:55.743: INFO: Pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049673762s
Dec 18 14:26:57.805: INFO: Pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111091011s
Dec 18 14:26:59.815: INFO: Pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12121797s
Dec 18 14:27:01.851: INFO: Pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157267951s
STEP: Saw pod success
Dec 18 14:27:01.851: INFO: Pod "var-expansion-a6581376-7725-4265-830d-68540d373a6f" satisfied condition "success or failure"
Dec 18 14:27:01.877: INFO: Trying to get logs from node iruya-node pod var-expansion-a6581376-7725-4265-830d-68540d373a6f container dapi-container: 
STEP: delete the pod
Dec 18 14:27:02.250: INFO: Waiting for pod var-expansion-a6581376-7725-4265-830d-68540d373a6f to disappear
Dec 18 14:27:02.299: INFO: Pod var-expansion-a6581376-7725-4265-830d-68540d373a6f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:27:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2711" for this suite.
Dec 18 14:27:08.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:27:08.669: INFO: namespace var-expansion-2711 deletion completed in 6.341861215s

• [SLOW TEST:15.214 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
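Note: env composition means a later env entry can reference an earlier one with $(NAME); the kubelet resolves the reference when the container starts. A minimal sketch (pod and image names assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata: {name: env-composition-demo}
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "env | grep _VAR"]
    env:
    - name: FIRST_VAR
      value: "foo"
    - name: COMPOSED_VAR            # composed from the variable declared above
      value: "$(FIRST_VAR);;$(FIRST_VAR)"
EOF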
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:27:08.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 18 14:27:17.474: INFO: Successfully updated pod "pod-update-b8b8e027-8a8d-4bd8-b066-7a375a257137"
STEP: verifying the updated pod is in kubernetes
Dec 18 14:27:17.625: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:27:17.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9987" for this suite.
Dec 18 14:27:39.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:27:39.835: INFO: namespace pods-9987 deletion completed in 22.194399603s

• [SLOW TEST:31.165 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
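Note: the update spec above is a read-modify-write of mutable pod metadata (labels here), verified by reading the pod back. The equivalent kubectl round trip, with assumed names and image:

kubectl run pod-update-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl label pod pod-update-demo time="$(date +%s)" --overwrite   # mutate a label
kubectl get pod pod-update-demo --show-labels                      # confirm the update landed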
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:27:39.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Dec 18 14:27:39.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2837'
Dec 18 14:27:42.924: INFO: stderr: ""
Dec 18 14:27:42.925: INFO: stdout: "pod/pause created\n"
Dec 18 14:27:42.925: INFO: Waiting up to 5m0s for 1 pod to be running and ready: [pause]
Dec 18 14:27:42.926: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2837" to be "running and ready"
Dec 18 14:27:43.039: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 112.903212ms
Dec 18 14:27:45.056: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130041645s
Dec 18 14:27:47.100: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173937485s
Dec 18 14:27:49.110: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183794294s
Dec 18 14:27:51.117: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.191590402s
Dec 18 14:27:51.118: INFO: Pod "pause" satisfied condition "running and ready"
Dec 18 14:27:51.118: INFO: Wanted 1 pod to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 18 14:27:51.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2837'
Dec 18 14:27:51.322: INFO: stderr: ""
Dec 18 14:27:51.322: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 18 14:27:51.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2837'
Dec 18 14:27:51.485: INFO: stderr: ""
Dec 18 14:27:51.485: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 18 14:27:51.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2837'
Dec 18 14:27:51.689: INFO: stderr: ""
Dec 18 14:27:51.689: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 18 14:27:51.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2837'
Dec 18 14:27:51.817: INFO: stderr: ""
Dec 18 14:27:51.818: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Dec 18 14:27:51.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2837'
Dec 18 14:27:52.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 14:27:52.085: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 18 14:27:52.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2837'
Dec 18 14:27:52.461: INFO: stderr: "No resources found.\n"
Dec 18 14:27:52.461: INFO: stdout: ""
Dec 18 14:27:52.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2837 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 14:27:52.585: INFO: stderr: ""
Dec 18 14:27:52.585: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:27:52.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2837" for this suite.
Dec 18 14:27:58.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:27:58.708: INFO: namespace kubectl-2837 deletion completed in 6.113406925s

• [SLOW TEST:18.872 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
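
Aside: stripped of the --kubeconfig and --namespace flags, the label add/verify/remove cycle logged above reduces to three kubectl calls; pod and label names are the ones from this run, and the trailing dash on the last call is what deletes the label:

    kubectl label pod pause testing-label=testing-label-value   # add the label
    kubectl get pod pause -L testing-label                      # show it as an extra column
    kubectl label pod pause testing-label-                      # remove it (note the trailing '-')
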
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:27:58.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 18 14:27:58.902: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:28:16.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2358" for this suite.
Dec 18 14:28:38.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:28:38.656: INFO: namespace init-container-2358 deletion completed in 22.184827021s

• [SLOW TEST:39.947 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
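
Aside: the test's PodSpec is not printed above, so purely as an illustrative sketch, a RestartAlways pod whose init containers must each exit 0, in order, before the app container starts could look like this (names and image are assumptions, not read from the test):

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo                  # hypothetical name
    spec:
      restartPolicy: Always            # the RestartAlways case from the spec title
      initContainers:                  # run sequentially to completion before 'containers'
      - name: init-1
        image: busybox:1.29
        command: ['sh', '-c', 'true']
      containers:
      - name: app
        image: busybox:1.29
        command: ['sh', '-c', 'sleep 3600']
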
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:28:38.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:28:38.767: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc" in namespace "downward-api-3467" to be "success or failure"
Dec 18 14:28:38.797: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc": Phase="Pending", Reason="", readiness=false. Elapsed: 29.954582ms
Dec 18 14:28:40.805: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037471354s
Dec 18 14:28:42.811: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043219203s
Dec 18 14:28:45.002: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234641896s
Dec 18 14:28:47.009: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241777751s
Dec 18 14:28:49.017: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.24922299s
STEP: Saw pod success
Dec 18 14:28:49.017: INFO: Pod "downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc" satisfied condition "success or failure"
Dec 18 14:28:49.020: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc container client-container: 
STEP: delete the pod
Dec 18 14:28:49.064: INFO: Waiting for pod downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc to disappear
Dec 18 14:28:49.194: INFO: Pod downwardapi-volume-4ed1fa85-3b91-4fee-9f89-0592182164fc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:28:49.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3467" for this suite.
Dec 18 14:28:55.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:28:55.457: INFO: namespace downward-api-3467 deletion completed in 6.252636165s

• [SLOW TEST:16.801 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
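
Aside: a minimal sketch of the downward API volume pattern this spec exercises, with illustrative names; a resourceFieldRef with a 1m divisor makes the mounted file report the CPU request in millicores:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-cpu-request           # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ['sh', '-c', 'cat /etc/podinfo/cpu_request']   # prints 250
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # report in millicores
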
SSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:28:55.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Dec 18 14:28:55.594: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5539" to be "success or failure"
Dec 18 14:28:55.641: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 47.309589ms
Dec 18 14:28:57.659: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065477502s
Dec 18 14:28:59.678: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083782658s
Dec 18 14:29:01.690: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095894263s
Dec 18 14:29:03.717: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12267086s
Dec 18 14:29:05.734: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139813742s
Dec 18 14:29:07.747: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.15298471s
STEP: Saw pod success
Dec 18 14:29:07.747: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 18 14:29:07.752: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 18 14:29:07.813: INFO: Waiting for pod pod-host-path-test to disappear
Dec 18 14:29:07.822: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:29:07.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5539" for this suite.
Dec 18 14:29:13.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:29:14.037: INFO: namespace hostpath-5539 deletion completed in 6.204837374s

• [SLOW TEST:18.579 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
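
Aside: an illustrative sketch (paths and names assumed, not read from the test) of a pod that mounts a hostPath volume and prints the mode the spec title asks about:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-host-path-demo         # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox:1.29
        command: ['sh', '-c', 'stat -c %a /test-volume']   # print the octal mode
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp/hostpath-demo     # hypothetical host directory
          type: DirectoryOrCreate
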
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:29:14.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-dde001cf-2ca4-4535-baf7-b2ae50cd9af8
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:29:14.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1282" for this suite.
Dec 18 14:29:20.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:29:20.332: INFO: namespace configmap-1282 deletion completed in 6.223003609s

• [SLOW TEST:6.294 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
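
Aside: the failure being asserted here is server-side validation; applying a manifest like the illustrative one below is rejected because ConfigMap data keys must be non-empty (and consist only of alphanumerics, '-', '_', or '.'):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-empty-key        # hypothetical
    data:
      "": "value"                      # empty key: the API server refuses this
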
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:29:20.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-d97c0175-c6a0-4caa-9f93-d6cd88c9b160 in namespace container-probe-8044
Dec 18 14:29:30.500: INFO: Started pod test-webserver-d97c0175-c6a0-4caa-9f93-d6cd88c9b160 in namespace container-probe-8044
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 14:29:30.508: INFO: Initial restart count of pod test-webserver-d97c0175-c6a0-4caa-9f93-d6cd88c9b160 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:33:32.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8044" for this suite.
Dec 18 14:33:38.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:33:38.666: INFO: namespace container-probe-8044 deletion completed in 6.267159214s

• [SLOW TEST:258.334 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
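
Aside: the shape of the probe under test, as an illustrative sketch only; the e2e pod uses a test-webserver image, here substituted with nginx answering / so that the probe keeps returning 200 and restartCount stays 0, which is what the four-minute observation above verifies:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-ok                # hypothetical
    spec:
      containers:
      - name: test-webserver
        image: nginx:1.15-alpine       # assumption: any server that returns 200 works
        livenessProbe:
          httpGet:
            path: /                    # probe keeps passing, so no restarts
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 5
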
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:33:38.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:33:38.838: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7" in namespace "projected-929" to be "success or failure"
Dec 18 14:33:38.875: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.793079ms
Dec 18 14:33:40.886: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046994392s
Dec 18 14:33:42.903: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064817922s
Dec 18 14:33:44.910: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071526875s
Dec 18 14:33:46.921: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082466633s
Dec 18 14:33:48.935: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096129908s
STEP: Saw pod success
Dec 18 14:33:48.935: INFO: Pod "downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7" satisfied condition "success or failure"
Dec 18 14:33:48.941: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7 container client-container: 
STEP: delete the pod
Dec 18 14:33:49.126: INFO: Waiting for pod downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7 to disappear
Dec 18 14:33:49.140: INFO: Pod downwardapi-volume-f94e8d17-2d1f-4a55-a86a-d37a3160feb7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:33:49.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-929" for this suite.
Dec 18 14:33:55.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:33:55.323: INFO: namespace projected-929 deletion completed in 6.175767623s

• [SLOW TEST:16.657 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
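
Aside: the projected flavor wraps the same downward API items in projected.sources; an illustrative sketch that surfaces the container's memory limit as a file:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mem-limit        # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ['sh', '-c', 'cat /etc/podinfo/mem_limit']   # prints 64
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
                  divisor: 1Mi         # report in MiB
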
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:33:55.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:34:55.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7264" for this suite.
Dec 18 14:35:17.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:35:17.729: INFO: namespace container-probe-7264 deletion completed in 22.21157646s

• [SLOW TEST:82.402 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
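
Aside: unlike a failing liveness probe, a failing readiness probe never restarts the container; it only keeps the pod out of service endpoints, which is exactly what the observation window above checks. An illustrative sketch:

    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready                # hypothetical
    spec:
      containers:
      - name: app
        image: busybox:1.29
        command: ['sh', '-c', 'sleep 3600']
        readinessProbe:
          exec:
            command: ['false']         # always fails: pod stays Running, but never Ready
          initialDelaySeconds: 5
          periodSeconds: 5
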
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:35:17.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4869
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 18 14:35:17.897: INFO: Found 0 stateful pods, waiting for 3
Dec 18 14:35:27.917: INFO: Found 2 stateful pods, waiting for 3
Dec 18 14:35:37.912: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:35:37.912: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:35:37.912: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 14:35:47.920: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:35:47.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:35:47.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 18 14:35:47.992: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 18 14:35:58.063: INFO: Updating stateful set ss2
Dec 18 14:35:58.136: INFO: Waiting for Pod statefulset-4869/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 18 14:36:08.520: INFO: Found 2 stateful pods, waiting for 3
Dec 18 14:36:18.546: INFO: Found 2 stateful pods, waiting for 3
Dec 18 14:36:28.542: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:36:28.542: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:36:28.542: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 18 14:36:38.535: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:36:38.535: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 18 14:36:38.535: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 18 14:36:38.575: INFO: Updating stateful set ss2
Dec 18 14:36:38.620: INFO: Waiting for Pod statefulset-4869/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 14:36:49.827: INFO: Updating stateful set ss2
Dec 18 14:36:49.948: INFO: Waiting for StatefulSet statefulset-4869/ss2 to complete update
Dec 18 14:36:49.948: INFO: Waiting for Pod statefulset-4869/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 14:36:59.963: INFO: Waiting for StatefulSet statefulset-4869/ss2 to complete update
Dec 18 14:36:59.963: INFO: Waiting for Pod statefulset-4869/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 18 14:37:09.971: INFO: Waiting for StatefulSet statefulset-4869/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 18 14:37:19.970: INFO: Deleting all statefulset in ns statefulset-4869
Dec 18 14:37:19.973: INFO: Scaling statefulset ss2 to 0
Dec 18 14:38:00.036: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 14:38:00.043: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:38:00.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4869" for this suite.
Dec 18 14:38:08.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:38:08.239: INFO: namespace statefulset-4869 deletion completed in 8.161488113s

• [SLOW TEST:170.509 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
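
Aside: the canary and phased behavior above is driven by the RollingUpdate partition: with partition=N, only pods with ordinal >= N move to the new revision. A sketch of the same sequence with kubectl, using the StatefulSet name and image from this run:

    # canary: hold everything below ordinal 2, then change the template image
    kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    kubectl patch statefulset ss2 --type=json \
      -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'
    # phased roll-out: lower the partition step by step; partition 0 updates the rest
    kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
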
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:38:08.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b7e12e9d-a8ab-466d-94d8-600b139306c2
STEP: Creating a pod to test consume configMaps
Dec 18 14:38:08.393: INFO: Waiting up to 5m0s for pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88" in namespace "configmap-7933" to be "success or failure"
Dec 18 14:38:08.409: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88": Phase="Pending", Reason="", readiness=false. Elapsed: 15.589677ms
Dec 18 14:38:10.428: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034517341s
Dec 18 14:38:12.440: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047133405s
Dec 18 14:38:14.448: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054462453s
Dec 18 14:38:16.460: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067318133s
Dec 18 14:38:18.532: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139304687s
STEP: Saw pod success
Dec 18 14:38:18.533: INFO: Pod "pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88" satisfied condition "success or failure"
Dec 18 14:38:18.538: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88 container configmap-volume-test: 
STEP: delete the pod
Dec 18 14:38:18.690: INFO: Waiting for pod pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88 to disappear
Dec 18 14:38:18.721: INFO: Pod pod-configmaps-0814a8b7-1557-4ef2-9c3e-5e9ad3335b88 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:38:18.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7933" for this suite.
Dec 18 14:38:24.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:38:25.054: INFO: namespace configmap-7933 deletion completed in 6.300126977s

• [SLOW TEST:16.815 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
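
Aside: an illustrative sketch (keys and values assumed) of the ConfigMap-as-volume pattern this spec consumes; each data key becomes a file under the mount path:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: test-volume-map            # hypothetical
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-demo        # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ['sh', '-c', 'cat /etc/configmap-volume/data-1']   # prints value-1
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: test-volume-map
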
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:38:25.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 18 14:38:25.208: INFO: Waiting up to 5m0s for pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57" in namespace "downward-api-6946" to be "success or failure"
Dec 18 14:38:25.294: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57": Phase="Pending", Reason="", readiness=false. Elapsed: 85.264037ms
Dec 18 14:38:27.305: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096980541s
Dec 18 14:38:29.536: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327395479s
Dec 18 14:38:31.549: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340798551s
Dec 18 14:38:33.556: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347853767s
Dec 18 14:38:35.568: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.35920423s
STEP: Saw pod success
Dec 18 14:38:35.568: INFO: Pod "downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57" satisfied condition "success or failure"
Dec 18 14:38:35.575: INFO: Trying to get logs from node iruya-node pod downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57 container dapi-container: 
STEP: delete the pod
Dec 18 14:38:35.760: INFO: Waiting for pod downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57 to disappear
Dec 18 14:38:35.780: INFO: Pod downward-api-31b9974d-e7ab-4b36-9dca-11416eb14b57 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:38:35.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6946" for this suite.
Dec 18 14:38:43.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:38:44.031: INFO: namespace downward-api-6946 deletion completed in 8.239944347s

• [SLOW TEST:18.977 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
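
Aside: the three fields named in the spec title come from fieldRef env sources; an illustrative sketch:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-envars                # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ['sh', '-c', 'echo "$POD_NAME $POD_NAMESPACE $POD_IP"']
        env:
        - name: POD_NAME
          valueFrom: { fieldRef: { fieldPath: metadata.name } }
        - name: POD_NAMESPACE
          valueFrom: { fieldRef: { fieldPath: metadata.namespace } }
        - name: POD_IP
          valueFrom: { fieldRef: { fieldPath: status.podIP } }
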
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:38:44.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:38:44.197: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a" in namespace "projected-8924" to be "success or failure"
Dec 18 14:38:44.238: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.09628ms
Dec 18 14:38:46.246: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048641212s
Dec 18 14:38:48.366: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168152611s
Dec 18 14:38:50.376: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178304424s
Dec 18 14:38:52.387: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189091938s
Dec 18 14:38:54.397: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.1996856s
STEP: Saw pod success
Dec 18 14:38:54.397: INFO: Pod "downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a" satisfied condition "success or failure"
Dec 18 14:38:54.401: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a container client-container: 
STEP: delete the pod
Dec 18 14:38:54.621: INFO: Waiting for pod downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a to disappear
Dec 18 14:38:54.635: INFO: Pod downwardapi-volume-8c1cb669-faa0-4899-a181-d9664263b58a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:38:54.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8924" for this suite.
Dec 18 14:39:00.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:39:00.931: INFO: namespace projected-8924 deletion completed in 6.238067998s

• [SLOW TEST:16.899 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
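
Aside: this spec is the fallback case of the memory-limit sketch shown earlier: when the container sets no memory limit, the same resourceFieldRef reports the node's allocatable memory instead. Only the volume fragment changes:

    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_limit
              resourceFieldRef:
                containerName: client-container   # container defines no memory limit,
                resource: limits.memory           # so the file falls back to node allocatable
                divisor: 1Mi
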
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:39:00.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-361b5e89-ac8c-4ff0-8da6-1a661946c922 in namespace container-probe-3306
Dec 18 14:39:09.137: INFO: Started pod busybox-361b5e89-ac8c-4ff0-8da6-1a661946c922 in namespace container-probe-3306
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 14:39:09.141: INFO: Initial restart count of pod busybox-361b5e89-ac8c-4ff0-8da6-1a661946c922 is 0
Dec 18 14:40:01.512: INFO: Restart count of pod container-probe-3306/busybox-361b5e89-ac8c-4ff0-8da6-1a661946c922 is now 1 (52.371427781s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:40:01.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3306" for this suite.
Dec 18 14:40:07.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:40:07.931: INFO: namespace container-probe-3306 deletion completed in 6.340627664s

• [SLOW TEST:66.998 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
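
Aside: the restart at the 52-second mark above is the textbook exec-liveness pattern; an illustrative sketch in which the probed file disappears, the "cat /tmp/health" probe starts failing, and the kubelet restarts the container:

    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec              # hypothetical
    spec:
      containers:
      - name: busybox
        image: busybox:1.29
        command: ['sh', '-c', 'touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600']
        livenessProbe:
          exec:
            command: ['cat', '/tmp/health']   # fails once the file is removed
          initialDelaySeconds: 5
          periodSeconds: 5
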
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:40:07.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-a8b1ee31-e987-4985-bd4d-ed2b9cb5f93f
STEP: Creating a pod to test consume configMaps
Dec 18 14:40:08.106: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b" in namespace "projected-3020" to be "success or failure"
Dec 18 14:40:08.111: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290147ms
Dec 18 14:40:10.117: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009854946s
Dec 18 14:40:12.125: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018514804s
Dec 18 14:40:14.147: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040220012s
Dec 18 14:40:16.163: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05620616s
Dec 18 14:40:18.205: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097873803s
STEP: Saw pod success
Dec 18 14:40:18.205: INFO: Pod "pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b" satisfied condition "success or failure"
Dec 18 14:40:18.211: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 14:40:18.303: INFO: Waiting for pod pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b to disappear
Dec 18 14:40:18.359: INFO: Pod pod-projected-configmaps-db2db776-7041-48e8-8ac9-6d1c427f821b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:40:18.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3020" for this suite.
Dec 18 14:40:24.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:40:24.668: INFO: namespace projected-3020 deletion completed in 6.299719525s

• [SLOW TEST:16.735 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
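
Aside: "with mappings as non-root" combines an items: key-to-path remap with a non-root securityContext; an illustrative sketch (ConfigMap name, key, and uid are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-nonroot       # hypothetical
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                # the non-root half of the title
      containers:
      - name: projected-configmap-volume-test
        image: busybox:1.29
        command: ['sh', '-c', 'cat /etc/projected/remapped/data-2']
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: test-volume-map    # hypothetical; must exist in the namespace
              items:
              - key: data-2            # the mapping: key -> custom relative path
                path: remapped/data-2
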
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:40:24.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 18 14:40:25.254: INFO: Waiting up to 5m0s for pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999" in namespace "emptydir-9792" to be "success or failure"
Dec 18 14:40:25.285: INFO: Pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999": Phase="Pending", Reason="", readiness=false. Elapsed: 30.675464ms
Dec 18 14:40:27.303: INFO: Pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048349614s
Dec 18 14:40:29.317: INFO: Pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06209674s
Dec 18 14:40:31.332: INFO: Pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077188747s
Dec 18 14:40:33.356: INFO: Pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101503464s
STEP: Saw pod success
Dec 18 14:40:33.357: INFO: Pod "pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999" satisfied condition "success or failure"
Dec 18 14:40:33.365: INFO: Trying to get logs from node iruya-node pod pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999 container test-container: 
STEP: delete the pod
Dec 18 14:40:33.453: INFO: Waiting for pod pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999 to disappear
Dec 18 14:40:33.465: INFO: Pod pod-3b6b9dd0-9c17-4467-aebf-b64b0d99b999 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:40:33.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9792" for this suite.
Dec 18 14:40:39.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:40:39.830: INFO: namespace emptydir-9792 deletion completed in 6.359574847s

• [SLOW TEST:15.159 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
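
Aside: the (non-root,0777,default) tuple in the title reads as user, file mode, and emptyDir medium; an illustrative sketch of that cell of the matrix:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-nonroot           # hypothetical
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                # non-root
      containers:
      - name: test-container
        image: busybox:1.29
        command: ['sh', '-c', 'echo hi > /test-volume/f; chmod 0777 /test-volume/f; stat -c %a /test-volume/f']
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                   # "default" medium = node disk; medium: Memory would be tmpfs
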
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:40:39.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 18 14:40:39.957: INFO: Waiting up to 5m0s for pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e" in namespace "emptydir-8476" to be "success or failure"
Dec 18 14:40:39.991: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.080516ms
Dec 18 14:40:42.000: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042976523s
Dec 18 14:40:44.017: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060773717s
Dec 18 14:40:46.028: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071504641s
Dec 18 14:40:48.040: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083384842s
Dec 18 14:40:50.047: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09067727s
STEP: Saw pod success
Dec 18 14:40:50.047: INFO: Pod "pod-5a648d1e-2c94-49c9-9204-a2f5964a244e" satisfied condition "success or failure"
Dec 18 14:40:50.051: INFO: Trying to get logs from node iruya-node pod pod-5a648d1e-2c94-49c9-9204-a2f5964a244e container test-container: 
STEP: delete the pod
Dec 18 14:40:50.138: INFO: Waiting for pod pod-5a648d1e-2c94-49c9-9204-a2f5964a244e to disappear
Dec 18 14:40:50.145: INFO: Pod pod-5a648d1e-2c94-49c9-9204-a2f5964a244e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:40:50.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8476" for this suite.
Dec 18 14:40:56.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:40:56.333: INFO: namespace emptydir-8476 deletion completed in 6.182330802s

• [SLOW TEST:16.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
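
Aside: the (root,0777,default) variant is the sketch above minus the non-root securityContext; containers run as root by default, so only that block changes:

    spec:
      # no pod-level securityContext: the container runs as root (uid 0);
      # volume, file mode check, and default medium are as in the previous sketch
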
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:40:56.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 18 14:40:56.426: INFO: namespace kubectl-717
Dec 18 14:40:56.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-717'
Dec 18 14:40:58.762: INFO: stderr: ""
Dec 18 14:40:58.763: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 18 14:40:59.781: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:40:59.781: INFO: Found 0 / 1
Dec 18 14:41:00.770: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:00.770: INFO: Found 0 / 1
Dec 18 14:41:01.770: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:01.770: INFO: Found 0 / 1
Dec 18 14:41:02.776: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:02.776: INFO: Found 0 / 1
Dec 18 14:41:03.788: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:03.788: INFO: Found 0 / 1
Dec 18 14:41:04.778: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:04.778: INFO: Found 0 / 1
Dec 18 14:41:05.865: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:05.866: INFO: Found 0 / 1
Dec 18 14:41:06.773: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:06.773: INFO: Found 0 / 1
Dec 18 14:41:07.777: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:07.777: INFO: Found 1 / 1
Dec 18 14:41:07.778: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 18 14:41:07.785: INFO: Selector matched 1 pods for map[app:redis]
Dec 18 14:41:07.785: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 18 14:41:07.785: INFO: wait on redis-master startup in kubectl-717 
Dec 18 14:41:07.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v88fx redis-master --namespace=kubectl-717'
Dec 18 14:41:08.069: INFO: stderr: ""
Dec 18 14:41:08.069: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 18 Dec 14:41:06.332 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Dec 14:41:06.332 # Server started, Redis version 3.2.12\n1:M 18 Dec 14:41:06.333 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Dec 14:41:06.333 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 18 14:41:08.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-717'
Dec 18 14:41:08.377: INFO: stderr: ""
Dec 18 14:41:08.377: INFO: stdout: "service/rm2 exposed\n"
Dec 18 14:41:10.066: INFO: Service rm2 in namespace kubectl-717 found.
STEP: exposing service
Dec 18 14:41:12.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-717'
Dec 18 14:41:12.356: INFO: stderr: ""
Dec 18 14:41:12.356: INFO: stdout: "service/rm3 exposed\n"
Dec 18 14:41:12.421: INFO: Service rm3 in namespace kubectl-717 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:41:14.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-717" for this suite.
Dec 18 14:41:52.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:41:52.790: INFO: namespace kubectl-717 deletion completed in 38.295230541s

• [SLOW TEST:56.457 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
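
Aside: stripped of the --kubeconfig and --namespace flags, the expose sequence from this run is reusable as-is; exposing a service re-publishes the same selector under a new name and port:

    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
    kubectl get svc rm2 rm3      # both route their port's traffic to container port 6379
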
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:41:52.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 14:42:03.018: INFO: Waiting up to 5m0s for pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c" in namespace "pods-4395" to be "success or failure"
Dec 18 14:42:03.032: INFO: Pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.900517ms
Dec 18 14:42:05.042: INFO: Pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023283426s
Dec 18 14:42:07.054: INFO: Pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035516322s
Dec 18 14:42:09.063: INFO: Pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044256781s
Dec 18 14:42:11.070: INFO: Pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051088591s
STEP: Saw pod success
Dec 18 14:42:11.070: INFO: Pod "client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c" satisfied condition "success or failure"
Dec 18 14:42:11.073: INFO: Trying to get logs from node iruya-node pod client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c container env3cont: 
STEP: delete the pod
Dec 18 14:42:11.241: INFO: Waiting for pod client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c to disappear
Dec 18 14:42:11.257: INFO: Pod client-envvars-2f9b6feb-67f1-481f-b3eb-ae4fa050d83c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:42:11.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4395" for this suite.
Dec 18 14:42:57.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:42:57.617: INFO: namespace pods-4395 deletion completed in 46.339469961s

• [SLOW TEST:64.826 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
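The environment-variable test above relies on the kubelet injecting <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT variables for every service that already exists when a pod starts. A minimal sketch of checking this by hand; the service name and ports here are illustrative, not the ones the test created:

kubectl create service clusterip fooservice --tcp=8765:8080
# a pod created afterwards sees the injected variables:
kubectl run env-dump --image=busybox --restart=Never -- env
kubectl logs env-dump | grep FOOSERVICE
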
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:42:57.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:43:07.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6146" for this suite.
Dec 18 14:43:59.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:44:00.112: INFO: namespace kubelet-test-6146 deletion completed in 52.295874064s

• [SLOW TEST:62.494 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
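The read-only test above schedules a busybox container with a read-only root filesystem and verifies that writes to / fail. A minimal equivalent manifest, assuming illustrative pod name and command:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # the write to /file is expected to fail on a read-only root fs
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
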
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:44:00.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 14:44:24.336: INFO: Container started at 2019-12-18 14:44:07 +0000 UTC, pod became ready at 2019-12-18 14:44:23 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:44:24.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2326" for this suite.
Dec 18 14:44:48.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:44:48.498: INFO: namespace container-probe-2326 deletion completed in 24.152936185s

• [SLOW TEST:48.385 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
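The probe timings in the log (container started 14:44:07, pod ready 14:44:23) are the point of the test: readiness must not be reported before the probe's initial delay elapses, and the container must never restart. A readiness probe with an initial delay looks like the sketch below; the delay values and the probed command are illustrative, not the test's own:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/ready; sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15   # pod cannot be Ready before this elapses
      periodSeconds: 5
EOF
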
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:44:48.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 18 14:44:48.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5203'
Dec 18 14:44:49.071: INFO: stderr: ""
Dec 18 14:44:49.071: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 14:44:49.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5203'
Dec 18 14:44:49.325: INFO: stderr: ""
Dec 18 14:44:49.325: INFO: stdout: "update-demo-nautilus-7gczq update-demo-nautilus-7p2br "
Dec 18 14:44:49.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gczq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5203'
Dec 18 14:44:49.598: INFO: stderr: ""
Dec 18 14:44:49.599: INFO: stdout: ""
Dec 18 14:44:49.599: INFO: update-demo-nautilus-7gczq is created but not running
Dec 18 14:44:54.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5203'
Dec 18 14:44:55.651: INFO: stderr: ""
Dec 18 14:44:55.651: INFO: stdout: "update-demo-nautilus-7gczq update-demo-nautilus-7p2br "
Dec 18 14:44:55.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gczq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5203'
Dec 18 14:44:56.004: INFO: stderr: ""
Dec 18 14:44:56.004: INFO: stdout: ""
Dec 18 14:44:56.004: INFO: update-demo-nautilus-7gczq is created but not running
Dec 18 14:45:01.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5203'
Dec 18 14:45:01.203: INFO: stderr: ""
Dec 18 14:45:01.203: INFO: stdout: "update-demo-nautilus-7gczq update-demo-nautilus-7p2br "
Dec 18 14:45:01.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gczq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5203'
Dec 18 14:45:01.311: INFO: stderr: ""
Dec 18 14:45:01.311: INFO: stdout: "true"
Dec 18 14:45:01.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gczq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5203'
Dec 18 14:45:01.480: INFO: stderr: ""
Dec 18 14:45:01.480: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:45:01.480: INFO: validating pod update-demo-nautilus-7gczq
Dec 18 14:45:01.537: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:45:01.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:45:01.537: INFO: update-demo-nautilus-7gczq is verified up and running
Dec 18 14:45:01.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7p2br -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5203'
Dec 18 14:45:01.731: INFO: stderr: ""
Dec 18 14:45:01.732: INFO: stdout: "true"
Dec 18 14:45:01.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7p2br -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5203'
Dec 18 14:45:01.841: INFO: stderr: ""
Dec 18 14:45:01.842: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:45:01.842: INFO: validating pod update-demo-nautilus-7p2br
Dec 18 14:45:01.871: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:45:01.871: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:45:01.871: INFO: update-demo-nautilus-7p2br is verified up and running
STEP: using delete to clean up resources
Dec 18 14:45:01.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5203'
Dec 18 14:45:02.008: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 14:45:02.008: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 18 14:45:02.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5203'
Dec 18 14:45:04.140: INFO: stderr: "No resources found.\n"
Dec 18 14:45:04.140: INFO: stdout: ""
Dec 18 14:45:04.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5203 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 14:45:04.612: INFO: stderr: ""
Dec 18 14:45:04.613: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:45:04.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5203" for this suite.
Dec 18 14:45:26.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:45:26.757: INFO: namespace kubectl-5203 deletion completed in 22.112323456s

• [SLOW TEST:38.259 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
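Most of the output above is the test polling pod state through go-templates. An equivalent, somewhat more readable poll with jsonpath, using the label copied from the log:

kubectl get pods -l name=update-demo \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
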
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:45:26.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f387fb6f-5390-4033-8cf0-e28e13d7ccbd
STEP: Creating a pod to test consume configMaps
Dec 18 14:45:26.914: INFO: Waiting up to 5m0s for pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374" in namespace "configmap-9307" to be "success or failure"
Dec 18 14:45:26.937: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Pending", Reason="", readiness=false. Elapsed: 23.301802ms
Dec 18 14:45:28.948: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034093657s
Dec 18 14:45:30.964: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05038623s
Dec 18 14:45:33.636: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Pending", Reason="", readiness=false. Elapsed: 6.722102445s
Dec 18 14:45:35.648: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Pending", Reason="", readiness=false. Elapsed: 8.734493861s
Dec 18 14:45:37.657: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Pending", Reason="", readiness=false. Elapsed: 10.743442867s
Dec 18 14:45:39.665: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.751541308s
STEP: Saw pod success
Dec 18 14:45:39.666: INFO: Pod "pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374" satisfied condition "success or failure"
Dec 18 14:45:39.670: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374 container configmap-volume-test: 
STEP: delete the pod
Dec 18 14:45:39.754: INFO: Waiting for pod pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374 to disappear
Dec 18 14:45:39.868: INFO: Pod pod-configmaps-1afe9728-e3aa-4a80-b2a1-bf235eeb2374 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:45:39.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9307" for this suite.
Dec 18 14:45:45.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:45:46.104: INFO: namespace configmap-9307 deletion completed in 6.215529836s

• [SLOW TEST:19.347 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
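The "mappings and Item mode" variant above projects a single ConfigMap key to a chosen path with an explicit per-file mode. A sketch of the stanza that drives it, with illustrative ConfigMap, key, and pod names:

kubectl create configmap configmap-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume/path/to && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mode-demo
      items:
      - key: data-1
        path: path/to/data-2   # key is remapped to this relative path
        mode: 0400             # per-item file mode, overriding defaultMode
EOF
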
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:45:46.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 18 14:45:54.427: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:45:54.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8600" for this suite.
Dec 18 14:46:00.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:46:00.843: INFO: namespace container-runtime-8600 deletion completed in 6.360998972s

• [SLOW TEST:14.738 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
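FallbackToLogsOnError only substitutes container logs for the termination message when the container fails; here the pod succeeds and writes nothing to the message file, so the expected message is empty (the "&{}" in the log). The relevant container fields and how to read the result back, with an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/true"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the container has terminated:
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
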
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:46:00.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-797f
STEP: Creating a pod to test atomic-volume-subpath
Dec 18 14:46:01.171: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-797f" in namespace "subpath-4860" to be "success or failure"
Dec 18 14:46:01.182: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.824089ms
Dec 18 14:46:03.189: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017622799s
Dec 18 14:46:05.199: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027747741s
Dec 18 14:46:10.030: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.858010304s
Dec 18 14:46:12.038: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 10.866890992s
Dec 18 14:46:14.050: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 12.878354618s
Dec 18 14:46:16.058: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 14.886438705s
Dec 18 14:46:18.068: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 16.896999511s
Dec 18 14:46:20.076: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 18.904126351s
Dec 18 14:46:22.085: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 20.913474618s
Dec 18 14:46:24.097: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 22.925276877s
Dec 18 14:46:26.108: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 24.936153426s
Dec 18 14:46:28.118: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 26.946603208s
Dec 18 14:46:30.129: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 28.957529232s
Dec 18 14:46:32.137: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Running", Reason="", readiness=true. Elapsed: 30.965516607s
Dec 18 14:46:34.147: INFO: Pod "pod-subpath-test-configmap-797f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.975300609s
STEP: Saw pod success
Dec 18 14:46:34.147: INFO: Pod "pod-subpath-test-configmap-797f" satisfied condition "success or failure"
Dec 18 14:46:34.151: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-797f container test-container-subpath-configmap-797f: 
STEP: delete the pod
Dec 18 14:46:34.231: INFO: Waiting for pod pod-subpath-test-configmap-797f to disappear
Dec 18 14:46:34.247: INFO: Pod pod-subpath-test-configmap-797f no longer exists
STEP: Deleting pod pod-subpath-test-configmap-797f
Dec 18 14:46:34.248: INFO: Deleting pod "pod-subpath-test-configmap-797f" in namespace "subpath-4860"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:46:34.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4860" for this suite.
Dec 18 14:46:40.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:46:41.083: INFO: namespace subpath-4860 deletion completed in 6.816377792s

• [SLOW TEST:40.238 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
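The subPath variant above mounts a single ConfigMap key over an existing file rather than over a directory, which is what makes the atomic-writer behavior worth testing. A minimal sketch, where /etc/passwd merely stands in for "a file that already exists in the image" and all names are illustrative:

kubectl create configmap subpath-demo --from-literal=passwd='root:x:0:0:root:/root:/bin/sh'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-file-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/passwd"]
    volumeMounts:
    - name: config
      mountPath: /etc/passwd   # an existing file, overlaid by the subPath mount
      subPath: passwd
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: subpath-demo
EOF
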
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:46:41.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:46:41.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2510" for this suite.
Dec 18 14:47:03.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:47:03.412: INFO: namespace pods-2510 deletion completed in 22.146667839s

• [SLOW TEST:22.329 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
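The QoS class verified above is derived by the API server from requests and limits: Guaranteed when every container's CPU and memory requests equal its limits, BestEffort when no container sets any requests or limits, Burstable otherwise. Checking it by hand, with an illustrative pod name:

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'
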
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:47:03.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 18 14:47:04.090: INFO: created pod pod-service-account-defaultsa
Dec 18 14:47:04.091: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 18 14:47:04.121: INFO: created pod pod-service-account-mountsa
Dec 18 14:47:04.121: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 18 14:47:04.137: INFO: created pod pod-service-account-nomountsa
Dec 18 14:47:04.137: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 18 14:47:04.155: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 18 14:47:04.155: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 18 14:47:04.323: INFO: created pod pod-service-account-mountsa-mountspec
Dec 18 14:47:04.323: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 18 14:47:04.402: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 18 14:47:04.402: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 18 14:47:05.489: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 18 14:47:05.489: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 18 14:47:05.555: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 18 14:47:05.555: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 18 14:47:06.089: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 18 14:47:06.090: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:47:06.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4603" for this suite.
Dec 18 14:47:40.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:47:40.684: INFO: namespace svcaccounts-4603 deletion completed in 34.558548151s

• [SLOW TEST:37.271 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
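The nine pods above cover the automount matrix: automountServiceAccountToken can be set on the ServiceAccount, on the pod spec, or both, and the pod-level setting wins when both are present. A minimal sketch of the opt-out, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod spec takes precedence over the SA
  containers:
  - name: main
    image: busybox
    command: ["sleep", "600"]
EOF
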
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:47:40.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:47:41.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9458" for this suite.
Dec 18 14:47:47.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:47:47.414: INFO: namespace kubelet-test-9458 deletion completed in 6.280367399s

• [SLOW TEST:6.729 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
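The point of the test above is that a pod whose container always exits nonzero, and is therefore sitting in restart backoff, can still be deleted cleanly. Reproduced by hand under illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: OnFailure
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # always fails, so the kubelet keeps backing off
EOF
kubectl delete pod bin-false
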
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:47:47.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 18 14:47:47.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4522'
Dec 18 14:47:48.090: INFO: stderr: ""
Dec 18 14:47:48.090: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 14:47:48.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:47:48.375: INFO: stderr: ""
Dec 18 14:47:48.375: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
Dec 18 14:47:48.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-72k6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:47:48.529: INFO: stderr: ""
Dec 18 14:47:48.530: INFO: stdout: ""
Dec 18 14:47:48.530: INFO: update-demo-nautilus-72k6q is created but not running
Dec 18 14:47:53.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:47:53.803: INFO: stderr: ""
Dec 18 14:47:53.803: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
Dec 18 14:47:53.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-72k6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:47:54.015: INFO: stderr: ""
Dec 18 14:47:54.015: INFO: stdout: ""
Dec 18 14:47:54.015: INFO: update-demo-nautilus-72k6q is created but not running
Dec 18 14:47:59.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:47:59.247: INFO: stderr: ""
Dec 18 14:47:59.248: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
Dec 18 14:47:59.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-72k6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:47:59.381: INFO: stderr: ""
Dec 18 14:47:59.381: INFO: stdout: "true"
Dec 18 14:47:59.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-72k6q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:47:59.495: INFO: stderr: ""
Dec 18 14:47:59.495: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:47:59.495: INFO: validating pod update-demo-nautilus-72k6q
Dec 18 14:47:59.503: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:47:59.503: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:47:59.503: INFO: update-demo-nautilus-72k6q is verified up and running
Dec 18 14:47:59.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl92l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:47:59.622: INFO: stderr: ""
Dec 18 14:47:59.622: INFO: stdout: "true"
Dec 18 14:47:59.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl92l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:47:59.738: INFO: stderr: ""
Dec 18 14:47:59.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:47:59.738: INFO: validating pod update-demo-nautilus-nl92l
Dec 18 14:47:59.757: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:47:59.757: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:47:59.757: INFO: update-demo-nautilus-nl92l is verified up and running
STEP: scaling down the replication controller
Dec 18 14:47:59.760: INFO: scanned /root for discovery docs: 
Dec 18 14:47:59.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4522'
Dec 18 14:48:01.005: INFO: stderr: ""
Dec 18 14:48:01.005: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 14:48:01.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:01.244: INFO: stderr: ""
Dec 18 14:48:01.245: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 14:48:06.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:06.366: INFO: stderr: ""
Dec 18 14:48:06.366: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 14:48:11.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:11.560: INFO: stderr: ""
Dec 18 14:48:11.561: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 14:48:16.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:16.751: INFO: stderr: ""
Dec 18 14:48:16.752: INFO: stdout: "update-demo-nautilus-72k6q update-demo-nautilus-nl92l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 18 14:48:21.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:21.975: INFO: stderr: ""
Dec 18 14:48:21.975: INFO: stdout: "update-demo-nautilus-nl92l "
Dec 18 14:48:21.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl92l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:22.190: INFO: stderr: ""
Dec 18 14:48:22.191: INFO: stdout: "true"
Dec 18 14:48:22.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl92l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:22.372: INFO: stderr: ""
Dec 18 14:48:22.372: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:48:22.372: INFO: validating pod update-demo-nautilus-nl92l
Dec 18 14:48:22.384: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:48:22.385: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:48:22.385: INFO: update-demo-nautilus-nl92l is verified up and running
STEP: scaling up the replication controller
Dec 18 14:48:22.388: INFO: scanned /root for discovery docs: 
Dec 18 14:48:22.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4522'
Dec 18 14:48:23.848: INFO: stderr: ""
Dec 18 14:48:23.848: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 14:48:23.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:24.196: INFO: stderr: ""
Dec 18 14:48:24.196: INFO: stdout: "update-demo-nautilus-dlr2t update-demo-nautilus-nl92l "
Dec 18 14:48:24.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlr2t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:24.345: INFO: stderr: ""
Dec 18 14:48:24.346: INFO: stdout: ""
Dec 18 14:48:24.346: INFO: update-demo-nautilus-dlr2t is created but not running
Dec 18 14:48:29.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:29.565: INFO: stderr: ""
Dec 18 14:48:29.565: INFO: stdout: "update-demo-nautilus-dlr2t update-demo-nautilus-nl92l "
Dec 18 14:48:29.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlr2t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:29.726: INFO: stderr: ""
Dec 18 14:48:29.726: INFO: stdout: ""
Dec 18 14:48:29.726: INFO: update-demo-nautilus-dlr2t is created but not running
Dec 18 14:48:34.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4522'
Dec 18 14:48:34.909: INFO: stderr: ""
Dec 18 14:48:34.909: INFO: stdout: "update-demo-nautilus-dlr2t update-demo-nautilus-nl92l "
Dec 18 14:48:34.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlr2t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:35.041: INFO: stderr: ""
Dec 18 14:48:35.041: INFO: stdout: "true"
Dec 18 14:48:35.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlr2t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:35.147: INFO: stderr: ""
Dec 18 14:48:35.147: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:48:35.147: INFO: validating pod update-demo-nautilus-dlr2t
Dec 18 14:48:35.157: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:48:35.157: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:48:35.157: INFO: update-demo-nautilus-dlr2t is verified up and running
Dec 18 14:48:35.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl92l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:35.263: INFO: stderr: ""
Dec 18 14:48:35.263: INFO: stdout: "true"
Dec 18 14:48:35.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl92l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4522'
Dec 18 14:48:35.402: INFO: stderr: ""
Dec 18 14:48:35.402: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:48:35.402: INFO: validating pod update-demo-nautilus-nl92l
Dec 18 14:48:35.411: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:48:35.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:48:35.411: INFO: update-demo-nautilus-nl92l is verified up and running
STEP: using delete to clean up resources
Dec 18 14:48:35.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4522'
Dec 18 14:48:35.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 14:48:35.578: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 18 14:48:35.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4522'
Dec 18 14:48:35.867: INFO: stderr: "No resources found.\n"
Dec 18 14:48:35.867: INFO: stdout: ""
Dec 18 14:48:35.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4522 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 14:48:37.532: INFO: stderr: ""
Dec 18 14:48:37.532: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:48:37.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4522" for this suite.
Dec 18 14:49:03.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:49:03.937: INFO: namespace kubectl-4522 deletion completed in 26.392332796s

• [SLOW TEST:76.524 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
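Scaling in the run above is plain kubectl scale against the rc, followed by the same go-template polling until the pod list converges on the new replica count. The two scale operations, copied from the log with the --namespace flag omitted:

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
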
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:49:03.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 18 14:49:04.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6135,SelfLink:/api/v1/namespaces/watch-6135/configmaps/e2e-watch-test-resource-version,UID:3e1bfc71-957e-43b2-9040-04b6d223c3d8,ResourceVersion:17152370,Generation:0,CreationTimestamp:2019-12-18 14:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 18 14:49:04.261: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6135,SelfLink:/api/v1/namespaces/watch-6135/configmaps/e2e-watch-test-resource-version,UID:3e1bfc71-957e-43b2-9040-04b6d223c3d8,ResourceVersion:17152371,Generation:0,CreationTimestamp:2019-12-18 14:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:49:04.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6135" for this suite.
Dec 18 14:49:10.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:49:10.465: INFO: namespace watch-6135 deletion completed in 6.199389126s

• [SLOW TEST:6.525 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
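The watch above is opened at the resourceVersion returned by the first update, so only the later MODIFIED and the DELETED events (ResourceVersion 17152370 and 17152371 in the log) are replayed. The same semantics are visible against the raw API; the resourceVersion value below is illustrative and should be one returned by a previous write:

kubectl proxy --port=8001 &
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=17152369'
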
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:49:10.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 18 14:49:10.623: INFO: Waiting up to 5m0s for pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8" in namespace "emptydir-6474" to be "success or failure"
Dec 18 14:49:10.698: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Pending", Reason="", readiness=false. Elapsed: 74.32607ms
Dec 18 14:49:12.710: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086326775s
Dec 18 14:49:14.715: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091832824s
Dec 18 14:49:16.741: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117934872s
Dec 18 14:49:19.367: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.743092247s
Dec 18 14:49:21.381: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.757449508s
Dec 18 14:49:23.394: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.770656098s
STEP: Saw pod success
Dec 18 14:49:23.394: INFO: Pod "pod-ce0dd40f-47d9-45ae-834b-e294326c88d8" satisfied condition "success or failure"
Dec 18 14:49:23.399: INFO: Trying to get logs from node iruya-node pod pod-ce0dd40f-47d9-45ae-834b-e294326c88d8 container test-container: 
STEP: delete the pod
Dec 18 14:49:23.536: INFO: Waiting for pod pod-ce0dd40f-47d9-45ae-834b-e294326c88d8 to disappear
Dec 18 14:49:23.542: INFO: Pod pod-ce0dd40f-47d9-45ae-834b-e294326c88d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:49:23.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6474" for this suite.
Dec 18 14:49:29.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:49:29.746: INFO: namespace emptydir-6474 deletion completed in 6.197098306s

• [SLOW TEST:19.280 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
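The (root,0666,default) triple in the test name encodes the writing user, the file mode, and the emptyDir medium. A minimal emptyDir pod on the default (node-disk-backed) medium, with illustrative names; setting medium: Memory instead would back the volume with tmpfs:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo data > /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
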
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:49:29.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-056585d0-776e-41ee-a6bf-bac599a5b1da
STEP: Creating a pod to test consume configMaps
Dec 18 14:49:29.911: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba" in namespace "projected-3047" to be "success or failure"
Dec 18 14:49:29.938: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 27.334026ms
Dec 18 14:49:31.954: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043438572s
Dec 18 14:49:33.987: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076190036s
Dec 18 14:49:36.002: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090939528s
Dec 18 14:49:38.011: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100527807s
Dec 18 14:49:40.020: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109303306s
STEP: Saw pod success
Dec 18 14:49:40.020: INFO: Pod "pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba" satisfied condition "success or failure"
Dec 18 14:49:40.024: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 14:49:40.129: INFO: Waiting for pod pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba to disappear
Dec 18 14:49:40.143: INFO: Pod pod-projected-configmaps-74086238-95d9-4f2e-972c-8ff81442b4ba no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:49:40.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3047" for this suite.
Dec 18 14:49:46.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:49:46.592: INFO: namespace projected-3047 deletion completed in 6.25417635s

• [SLOW TEST:16.846 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
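
Note: "mappings and Item mode set" refers to a projected configMap volume where a key is remapped to a different path and given a per-item file mode. A minimal sketch under assumed names (busybox image, illustrative key and path; the suite generates its own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected/path/to/data-2 && cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-2   # key remapped to a new relative path
            mode: 0400             # per-item mode, overrides defaultMode
EOF
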
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:49:46.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 18 14:49:46.759: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:49:59.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1521" for this suite.
Dec 18 14:50:05.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:50:05.353: INFO: namespace init-container-1521 deletion completed in 6.109914476s

• [SLOW TEST:18.758 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
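
Note: the RestartNever case checks that init containers run to completion, in declaration order, before the app container starts, and that the pod then terminates rather than restarting. A minimal sketch (busybox image and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:        # each must exit 0 before the next starts
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["true"]
EOF
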
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:50:05.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:50:05.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523" in namespace "projected-1873" to be "success or failure"
Dec 18 14:50:05.597: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523": Phase="Pending", Reason="", readiness=false. Elapsed: 106.43977ms
Dec 18 14:50:07.608: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117687061s
Dec 18 14:50:09.616: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124906583s
Dec 18 14:50:11.624: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13327755s
Dec 18 14:50:14.193: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702141385s
Dec 18 14:50:16.206: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.715598625s
STEP: Saw pod success
Dec 18 14:50:16.206: INFO: Pod "downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523" satisfied condition "success or failure"
Dec 18 14:50:16.214: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523 container client-container: 
STEP: delete the pod
Dec 18 14:50:16.454: INFO: Waiting for pod downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523 to disappear
Dec 18 14:50:16.476: INFO: Pod downwardapi-volume-7ddb7d82-601d-451b-8ed6-33b1ed4f8523 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:50:16.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1873" for this suite.
Dec 18 14:50:22.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:50:22.644: INFO: namespace projected-1873 deletion completed in 6.158301432s

• [SLOW TEST:17.291 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
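
Note: this test projects limits.cpu through a downward API volume for a container that sets no CPU limit, in which case the kubelet reports the node's allocatable CPU instead. A minimal sketch (busybox image and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so the projected value falls back
    # to the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
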
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:50:22.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-57775f85-1396-4680-8ab1-a0014a61ba09
STEP: Creating a pod to test consume configMaps
Dec 18 14:50:22.746: INFO: Waiting up to 5m0s for pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd" in namespace "configmap-4836" to be "success or failure"
Dec 18 14:50:22.755: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.728908ms
Dec 18 14:50:24.770: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023505086s
Dec 18 14:50:26.782: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036055372s
Dec 18 14:50:28.796: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049560302s
Dec 18 14:50:30.804: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd": Phase="Running", Reason="", readiness=true. Elapsed: 8.05732252s
Dec 18 14:50:32.827: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080566788s
STEP: Saw pod success
Dec 18 14:50:32.827: INFO: Pod "pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd" satisfied condition "success or failure"
Dec 18 14:50:32.832: INFO: Trying to get logs from node iruya-node pod pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd container configmap-volume-test: 
STEP: delete the pod
Dec 18 14:50:33.158: INFO: Waiting for pod pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd to disappear
Dec 18 14:50:33.169: INFO: Pod pod-configmaps-822def61-a35c-4e91-b7e4-18b6df4889bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:50:33.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4836" for this suite.
Dec 18 14:50:39.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:50:39.431: INFO: namespace configmap-4836 deletion completed in 6.224965959s

• [SLOW TEST:16.787 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
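
Note: "consumable in multiple volumes in the same pod" means one ConfigMap mounted through two separate volumes, and both mounts must expose identical content. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: multi-mount-cm
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cmp /etc/cm-one/data-1 /etc/cm-two/data-1 && echo volumes agree"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:                 # the same ConfigMap backs both volumes
  - name: cm-one
    configMap:
      name: multi-mount-cm
  - name: cm-two
    configMap:
      name: multi-mount-cm
EOF
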
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:50:39.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9170
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 18 14:50:39.489: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 18 14:51:19.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9170 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:51:19.747: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:51:20.293: INFO: Waiting for endpoints: map[]
Dec 18 14:51:20.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9170 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 18 14:51:20.307: INFO: >>> kubeConfig: /root/.kube/config
Dec 18 14:51:20.986: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:51:20.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9170" for this suite.
Dec 18 14:51:37.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:51:37.289: INFO: namespace pod-network-test-9170 deletion completed in 16.287833322s

• [SLOW TEST:57.858 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
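
Note: the ExecWithOptions lines above are the whole mechanism of this test: a helper pod curls one test pod's /dial endpoint, which in turn dials the other test pod over HTTP and reports which hostname answered. Reproduced by hand (pod name, container name, and IPs taken from this run; the /dial endpoint is served by the e2e test webserver image):

kubectl exec -n pod-network-test-9170 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'"
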
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:51:37.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Dec 18 14:51:37.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3808'
Dec 18 14:51:39.905: INFO: stderr: ""
Dec 18 14:51:39.905: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 14:51:39.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:51:40.186: INFO: stderr: ""
Dec 18 14:51:40.186: INFO: stdout: "update-demo-nautilus-rz9p6 update-demo-nautilus-w64xt "
Dec 18 14:51:40.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rz9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:40.357: INFO: stderr: ""
Dec 18 14:51:40.357: INFO: stdout: ""
Dec 18 14:51:40.357: INFO: update-demo-nautilus-rz9p6 is created but not running
Dec 18 14:51:45.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:51:46.715: INFO: stderr: ""
Dec 18 14:51:46.716: INFO: stdout: "update-demo-nautilus-rz9p6 update-demo-nautilus-w64xt "
Dec 18 14:51:46.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rz9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:47.990: INFO: stderr: ""
Dec 18 14:51:47.990: INFO: stdout: ""
Dec 18 14:51:47.990: INFO: update-demo-nautilus-rz9p6 is created but not running
Dec 18 14:51:52.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:51:53.193: INFO: stderr: ""
Dec 18 14:51:53.194: INFO: stdout: "update-demo-nautilus-rz9p6 update-demo-nautilus-w64xt "
Dec 18 14:51:53.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rz9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:53.299: INFO: stderr: ""
Dec 18 14:51:53.299: INFO: stdout: ""
Dec 18 14:51:53.299: INFO: update-demo-nautilus-rz9p6 is created but not running
Dec 18 14:51:58.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:51:58.533: INFO: stderr: ""
Dec 18 14:51:58.533: INFO: stdout: "update-demo-nautilus-rz9p6 update-demo-nautilus-w64xt "
Dec 18 14:51:58.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rz9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:58.691: INFO: stderr: ""
Dec 18 14:51:58.692: INFO: stdout: "true"
Dec 18 14:51:58.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rz9p6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:58.898: INFO: stderr: ""
Dec 18 14:51:58.898: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:51:58.898: INFO: validating pod update-demo-nautilus-rz9p6
Dec 18 14:51:59.046: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:51:59.046: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:51:59.046: INFO: update-demo-nautilus-rz9p6 is verified up and running
Dec 18 14:51:59.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w64xt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:59.181: INFO: stderr: ""
Dec 18 14:51:59.181: INFO: stdout: "true"
Dec 18 14:51:59.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w64xt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:51:59.282: INFO: stderr: ""
Dec 18 14:51:59.282: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 18 14:51:59.282: INFO: validating pod update-demo-nautilus-w64xt
Dec 18 14:51:59.292: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 18 14:51:59.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 18 14:51:59.293: INFO: update-demo-nautilus-w64xt is verified up and running
STEP: rolling-update to new replication controller
Dec 18 14:51:59.296: INFO: scanned /root for discovery docs: 
Dec 18 14:51:59.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3808'
Dec 18 14:52:30.981: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 18 14:52:30.982: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 18 14:52:30.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:52:31.135: INFO: stderr: ""
Dec 18 14:52:31.135: INFO: stdout: "update-demo-kitten-446k5 update-demo-kitten-pk4kv update-demo-nautilus-w64xt "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 18 14:52:36.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:52:36.293: INFO: stderr: ""
Dec 18 14:52:36.293: INFO: stdout: "update-demo-kitten-446k5 update-demo-kitten-pk4kv update-demo-nautilus-w64xt "
STEP: Replicas for name=update-demo: expected=2 actual=3
Dec 18 14:52:41.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3808'
Dec 18 14:52:41.478: INFO: stderr: ""
Dec 18 14:52:41.478: INFO: stdout: "update-demo-kitten-446k5 update-demo-kitten-pk4kv "
Dec 18 14:52:41.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-446k5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:52:41.630: INFO: stderr: ""
Dec 18 14:52:41.630: INFO: stdout: "true"
Dec 18 14:52:41.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-446k5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:52:41.836: INFO: stderr: ""
Dec 18 14:52:41.836: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 18 14:52:41.836: INFO: validating pod update-demo-kitten-446k5
Dec 18 14:52:41.864: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 18 14:52:41.864: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 18 14:52:41.864: INFO: update-demo-kitten-446k5 is verified up and running
Dec 18 14:52:41.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pk4kv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:52:41.981: INFO: stderr: ""
Dec 18 14:52:41.981: INFO: stdout: "true"
Dec 18 14:52:41.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pk4kv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3808'
Dec 18 14:52:42.088: INFO: stderr: ""
Dec 18 14:52:42.089: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 18 14:52:42.089: INFO: validating pod update-demo-kitten-pk4kv
Dec 18 14:52:42.205: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 18 14:52:42.206: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 18 14:52:42.206: INFO: update-demo-kitten-pk4kv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:52:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3808" for this suite.
Dec 18 14:53:12.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:53:12.349: INFO: namespace kubectl-3808 deletion completed in 30.137798864s

• [SLOW TEST:95.059 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
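
Note: stripped of the harness, the rolling update above is two kubectl invocations; the suite pipes both RC manifests over stdin (-f -), so the file names here are illustrative:

kubectl create -f nautilus-rc.yaml --namespace=kubectl-3808
kubectl rolling-update update-demo-nautilus --update-period=1s \
  -f kitten-rc.yaml --namespace=kubectl-3808

As the stderr above notes, rolling-update was already deprecated in this release; with Deployments the equivalent flow is kubectl set image followed by kubectl rollout status.
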
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:53:12.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1218 14:53:15.996313       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 14:53:15.996: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:53:15.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-844" for this suite.
Dec 18 14:53:22.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:53:22.667: INFO: namespace gc-844 deletion completed in 6.66651017s

• [SLOW TEST:10.318 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
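
Note: "not orphaning" means the Deployment is deleted with cascading propagation, so the garbage collector also removes its ReplicaSet and Pods; the "expected 0 ... got ..." lines above are the test polling until both counts reach zero. A minimal sketch with an illustrative name (--cascade was a boolean in kubectl 1.15; newer releases spell it --cascade=background):

kubectl delete deployment simpletest-deployment --cascade=true
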
SSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:53:22.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-8726
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8726 to expose endpoints map[]
Dec 18 14:53:22.924: INFO: Get endpoints failed (110.527027ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 18 14:53:23.938: INFO: successfully validated that service endpoint-test2 in namespace services-8726 exposes endpoints map[] (1.123922175s elapsed)
STEP: Creating pod pod1 in namespace services-8726
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8726 to expose endpoints map[pod1:[80]]
Dec 18 14:53:28.049: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.094613532s elapsed, will retry)
Dec 18 14:53:33.203: INFO: successfully validated that service endpoint-test2 in namespace services-8726 exposes endpoints map[pod1:[80]] (9.248506111s elapsed)
STEP: Creating pod pod2 in namespace services-8726
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8726 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 18 14:53:38.501: INFO: Unexpected endpoints: found map[4544efa1-7502-46a9-943e-f090481eced5:[80]], expected map[pod1:[80] pod2:[80]] (5.255638712s elapsed, will retry)
Dec 18 14:53:41.729: INFO: successfully validated that service endpoint-test2 in namespace services-8726 exposes endpoints map[pod1:[80] pod2:[80]] (8.483701193s elapsed)
STEP: Deleting pod pod1 in namespace services-8726
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8726 to expose endpoints map[pod2:[80]]
Dec 18 14:53:42.791: INFO: successfully validated that service endpoint-test2 in namespace services-8726 exposes endpoints map[pod2:[80]] (1.054812261s elapsed)
STEP: Deleting pod pod2 in namespace services-8726
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8726 to expose endpoints map[]
Dec 18 14:53:43.832: INFO: successfully validated that service endpoint-test2 in namespace services-8726 exposes endpoints map[] (1.024180976s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:53:48.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8726" for this suite.
Dec 18 14:53:54.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:53:54.663: INFO: namespace services-8726 deletion completed in 6.171763806s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:31.995 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
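
Note: the test drives the endpoints object by creating and deleting pods behind a selector-based service; endpoints track only ready pods matching the selector. A minimal sketch with an illustrative label (the suite's pods carry generated labels):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-demo     # endpoints list ready pods with this label
  ports:
  - port: 80
    targetPort: 80
EOF
kubectl get endpoints endpoint-test2 --watch   # addresses appear/disappear with the pods
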
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:53:54.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 18 14:54:10.897: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:10.908: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:12.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:12.924: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:14.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:14.918: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:16.909: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:16.924: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:18.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:18.942: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:20.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:21.306: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:22.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:22.916: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:24.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:24.919: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:26.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:28.819: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:28.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:29.037: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:30.909: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:30.924: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:32.913: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:32.924: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:34.909: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:34.923: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:36.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:38.232: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 18 14:54:38.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 18 14:54:38.913: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:54:38.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7952" for this suite.
Dec 18 14:55:01.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:55:01.140: INFO: namespace container-lifecycle-hook-7952 deletion completed in 22.174351756s

• [SLOW TEST:66.476 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
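
Note: the long "still exists" tail above is expected: deleting the pod triggers its preStop exec hook, and the pod lingers until the hook and grace period complete. A minimal sketch of the field being exercised (busybox image and the hook command are illustrative; the suite's hook calls back to a handler pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30   # the hook must finish within this window
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran"]
EOF
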
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:55:01.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6074.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6074.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6074.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6074.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6074.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.235.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.235.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.235.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.235.71_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6074.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6074.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6074.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6074.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6074.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6074.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 71.235.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.235.71_udp@PTR;check="$$(dig +tcp +noall +answer +search 71.235.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.235.71_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 14:55:15.481: INFO: Unable to read wheezy_udp@dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.490: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.495: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.500: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.505: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.511: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.517: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.521: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.526: INFO: Unable to read 10.104.235.71_udp@PTR from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.530: INFO: Unable to read 10.104.235.71_tcp@PTR from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.533: INFO: Unable to read jessie_udp@dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.537: INFO: Unable to read jessie_tcp@dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.540: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.543: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.546: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.550: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6074.svc.cluster.local from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.554: INFO: Unable to read jessie_udp@PodARecord from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.557: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.561: INFO: Unable to read 10.104.235.71_udp@PTR from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.564: INFO: Unable to read 10.104.235.71_tcp@PTR from pod dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6: the server could not find the requested resource (get pods dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6)
Dec 18 14:55:15.564: INFO: Lookups using dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6 failed for: [wheezy_udp@dns-test-service.dns-6074.svc.cluster.local wheezy_tcp@dns-test-service.dns-6074.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6074.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6074.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.235.71_udp@PTR 10.104.235.71_tcp@PTR jessie_udp@dns-test-service.dns-6074.svc.cluster.local jessie_tcp@dns-test-service.dns-6074.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6074.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6074.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6074.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.235.71_udp@PTR 10.104.235.71_tcp@PTR]

Dec 18 14:55:20.727: INFO: DNS probes using dns-6074/dns-test-34d0173b-cd51-45ed-a352-27a89ee2a0b6 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:55:21.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6074" for this suite.
Dec 18 14:55:27.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:55:27.485: INFO: namespace dns-6074 deletion completed in 6.159720076s

• [SLOW TEST:26.344 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
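
Note: the probe loops above boil down to a handful of lookups that must succeed from inside the cluster; the names and ClusterIP below are the ones generated for this run:

dig +short dns-test-service.dns-6074.svc.cluster.local A
dig +short _http._tcp.dns-test-service.dns-6074.svc.cluster.local SRV
dig +short -x 10.104.235.71   # PTR for the service ClusterIP
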
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:55:27.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f8adf611-9b78-42be-9a9f-e4664b2488a5
STEP: Creating a pod to test consume secrets
Dec 18 14:55:27.586: INFO: Waiting up to 5m0s for pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40" in namespace "secrets-4961" to be "success or failure"
Dec 18 14:55:27.594: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40": Phase="Pending", Reason="", readiness=false. Elapsed: 7.774923ms
Dec 18 14:55:29.602: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015411015s
Dec 18 14:55:31.660: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073385207s
Dec 18 14:55:33.672: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086085273s
Dec 18 14:55:35.682: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096042547s
Dec 18 14:55:37.696: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109514057s
STEP: Saw pod success
Dec 18 14:55:37.696: INFO: Pod "pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40" satisfied condition "success or failure"
Dec 18 14:55:37.702: INFO: Trying to get logs from node iruya-node pod pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40 container secret-volume-test: 
STEP: delete the pod
Dec 18 14:55:37.767: INFO: Waiting for pod pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40 to disappear
Dec 18 14:55:37.782: INFO: Pod pod-secrets-5b8b0b4d-17ef-47ae-90a5-25373c9a2c40 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:55:37.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4961" for this suite.
Dec 18 14:55:43.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:55:44.000: INFO: namespace secrets-4961 deletion completed in 6.202021459s

• [SLOW TEST:16.515 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
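
Note: this is the secret-volume counterpart of the configMap tests above: the secret's keys appear as files under the mount point. A minimal sketch with illustrative names (the suite generates its own secret contents):

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF
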
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:55:44.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 14:55:44.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5967'
Dec 18 14:55:44.219: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 18 14:55:44.220: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 18 14:55:44.271: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 18 14:55:44.314: INFO: scanned /root for discovery docs: 
Dec 18 14:55:44.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5967'
Dec 18 14:56:07.571: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 18 14:56:07.571: INFO: stdout: "Created e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e\nScaling up e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 18 14:56:07.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5967'
Dec 18 14:56:07.724: INFO: stderr: ""
Dec 18 14:56:07.724: INFO: stdout: "e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e-xdx6v "
Dec 18 14:56:07.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e-xdx6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5967'
Dec 18 14:56:07.936: INFO: stderr: ""
Dec 18 14:56:07.937: INFO: stdout: "true"
Dec 18 14:56:07.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e-xdx6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5967'
Dec 18 14:56:08.114: INFO: stderr: ""
Dec 18 14:56:08.114: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 18 14:56:08.114: INFO: e2e-test-nginx-rc-1d6f8930065925321251bbe003c3114e-xdx6v is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 18 14:56:08.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5967'
Dec 18 14:56:08.300: INFO: stderr: ""
Dec 18 14:56:08.301: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:56:08.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5967" for this suite.
Dec 18 14:56:30.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:56:30.497: INFO: namespace kubectl-5967 deletion completed in 22.139292137s

• [SLOW TEST:46.495 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
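
Both commands this test drives emit deprecation warnings in the log above: --generator=run/v1 and rolling-update were slated for removal. A minimal modern equivalent, sketched here with a hypothetical Deployment named e2e-test-nginx (not the manifest this suite generates), hands the same one-pod rollout to the Deployment controller:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx              # hypothetical name, mirroring the rc above
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        imagePullPolicy: IfNotPresent

Applying this and then running kubectl rollout status deployment/e2e-test-nginx reproduces the create-and-stabilize flow; re-applying with a new image triggers the rolling update that rolling-update performed by hand.
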
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:56:30.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 18 14:56:30.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a" in namespace "projected-9176" to be "success or failure"
Dec 18 14:56:30.660: INFO: Pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.572997ms
Dec 18 14:56:32.675: INFO: Pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025458543s
Dec 18 14:56:34.682: INFO: Pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031702233s
Dec 18 14:56:36.692: INFO: Pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041972939s
Dec 18 14:56:38.702: INFO: Pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052394277s
STEP: Saw pod success
Dec 18 14:56:38.702: INFO: Pod "downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a" satisfied condition "success or failure"
Dec 18 14:56:38.706: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a container client-container: 
STEP: delete the pod
Dec 18 14:56:38.786: INFO: Waiting for pod downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a to disappear
Dec 18 14:56:38.791: INFO: Pod downwardapi-volume-6d79402a-f90b-4456-b0dc-9f31929c5e9a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:56:38.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9176" for this suite.
Dec 18 14:56:44.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:56:44.942: INFO: namespace projected-9176 deletion completed in 6.145987514s

• [SLOW TEST:14.443 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
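
The pod under test mounts a projected downward API volume and reads its own memory request back out of a file. A minimal sketch of that wiring, with hypothetical names (the suite generates its own), assuming a 32Mi request scaled to mebibytes via the divisor:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]   # prints "32"
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi          # scale bytes down to Mi
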
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:56:44.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 18 14:56:45.047: INFO: Waiting up to 5m0s for pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8" in namespace "downward-api-3623" to be "success or failure"
Dec 18 14:56:45.084: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.869514ms
Dec 18 14:56:47.091: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044329687s
Dec 18 14:56:49.101: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054488619s
Dec 18 14:56:51.110: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063763818s
Dec 18 14:56:53.121: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074685574s
Dec 18 14:56:55.134: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08766833s
STEP: Saw pod success
Dec 18 14:56:55.135: INFO: Pod "downward-api-32d07c7c-d25b-47c8-9189-987102215ae8" satisfied condition "success or failure"
Dec 18 14:56:55.140: INFO: Trying to get logs from node iruya-node pod downward-api-32d07c7c-d25b-47c8-9189-987102215ae8 container dapi-container: 
STEP: delete the pod
Dec 18 14:56:55.471: INFO: Waiting for pod downward-api-32d07c7c-d25b-47c8-9189-987102215ae8 to disappear
Dec 18 14:56:55.535: INFO: Pod downward-api-32d07c7c-d25b-47c8-9189-987102215ae8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:56:55.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3623" for this suite.
Dec 18 14:57:01.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:57:01.672: INFO: namespace downward-api-3623 deletion completed in 6.124876028s

• [SLOW TEST:16.728 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
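
Here the downward API is consumed through environment variables rather than a volume: fieldRef metadata.uid injects the pod's own UID at container start. A minimal sketch, hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the pod's UID, assigned at creation
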
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:57:01.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e3415751-a961-43f7-a45c-fc7ce2ed2f76
STEP: Creating a pod to test consume configMaps
Dec 18 14:57:01.788: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9" in namespace "projected-3482" to be "success or failure"
Dec 18 14:57:01.820: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.185549ms
Dec 18 14:57:03.837: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048477912s
Dec 18 14:57:05.849: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060670528s
Dec 18 14:57:07.866: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077922538s
Dec 18 14:57:09.936: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147431744s
Dec 18 14:57:12.064: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.275089286s
STEP: Saw pod success
Dec 18 14:57:12.064: INFO: Pod "pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9" satisfied condition "success or failure"
Dec 18 14:57:12.084: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 18 14:57:12.141: INFO: Waiting for pod pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9 to disappear
Dec 18 14:57:12.145: INFO: Pod pod-projected-configmaps-2b28e03a-f59a-421b-a7e0-d71cb90ae0c9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:57:12.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3482" for this suite.
Dec 18 14:57:18.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:57:19.161: INFO: namespace projected-3482 deletion completed in 7.011904387s

• [SLOW TEST:17.488 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
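
For projected volumes the file-mode knob sits on the projected volume itself. A minimal sketch (hypothetical names, not the generated manifest) that makes every projected configMap key land as a 0400 file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo           # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-cm-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      defaultMode: 0400             # applied to all projected files
      sources:
      - configMap:
          name: projected-cm-demo
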
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:57:19.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 18 14:57:19.245: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:57:33.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1990" for this suite.
Dec 18 14:57:40.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:57:40.291: INFO: namespace init-container-1990 deletion completed in 6.257046829s

• [SLOW TEST:21.130 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
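
The behavior being verified: with restartPolicy: Never, a failing init container is not retried, the pod goes straight to Failed, and the app container never starts. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo              # hypothetical
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"] # fails once; Never means no retry
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "echo this should never run"]

kubectl get pod init-fail-demo then reports phase Failed, with the app container still Waiting.
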
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:57:40.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-af3cb0be-c580-4d6d-a74d-7e7beaa49921
STEP: Creating a pod to test consume secrets
Dec 18 14:57:40.481: INFO: Waiting up to 5m0s for pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe" in namespace "secrets-2835" to be "success or failure"
Dec 18 14:57:40.507: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 24.867182ms
Dec 18 14:57:42.521: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039223936s
Dec 18 14:57:44.539: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057426202s
Dec 18 14:57:46.558: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076396912s
Dec 18 14:57:48.577: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe": Phase="Running", Reason="", readiness=true. Elapsed: 8.095395927s
Dec 18 14:57:50.593: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111209755s
STEP: Saw pod success
Dec 18 14:57:50.593: INFO: Pod "pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe" satisfied condition "success or failure"
Dec 18 14:57:50.601: INFO: Trying to get logs from node iruya-node pod pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe container secret-volume-test: 
STEP: delete the pod
Dec 18 14:57:50.741: INFO: Waiting for pod pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe to disappear
Dec 18 14:57:50.749: INFO: Pod pod-secrets-0f77b9fc-6cda-47aa-8a82-5c7c7ac74dfe no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:57:50.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2835" for this suite.
Dec 18 14:57:56.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:57:56.928: INFO: namespace secrets-2835 deletion completed in 6.155190189s

• [SLOW TEST:16.636 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
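
"Mappings" here means items: entries that remap a secret key onto a chosen file path inside the volume. A minimal sketch, hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-map-demo             # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1       # key remapped to this file name
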
SS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:57:56.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2bfdb3d6-a693-4456-aa87-ecb5e6c9f2b2
STEP: Creating configMap with name cm-test-opt-upd-befa3b24-b708-4c16-a73b-4982ef829f13
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2bfdb3d6-a693-4456-aa87-ecb5e6c9f2b2
STEP: Updating configmap cm-test-opt-upd-befa3b24-b708-4c16-a73b-4982ef829f13
STEP: Creating configMap with name cm-test-opt-create-21016685-4436-491b-8dd0-e53d1c80d1de
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:58:15.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9391" for this suite.
Dec 18 14:58:39.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:58:39.541: INFO: namespace configmap-9391 deletion completed in 24.188515682s

• [SLOW TEST:42.612 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
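
The three configMaps in this test (-del, -upd, -create) exercise the same property from three directions: a volume marked optional: true tolerates a missing configMap, and the kubelet refreshes the mounted files when the configMap is deleted, updated, or created later. A minimal sketch of the optional mount, hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-optional-demo        # hypothetical
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/cm-volume 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    configMap:
      name: cm-created-later        # hypothetical; may not exist yet
      optional: true                # pod starts even while it is absent

Once the configMap is created, its keys appear in the volume on the kubelet's next sync pass, which is what the "waiting to observe update in volume" step above is polling for.
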
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:58:39.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6e91ac26-31c3-4c2c-b18d-ffeea82ca56e
STEP: Creating a pod to test consume secrets
Dec 18 14:58:39.805: INFO: Waiting up to 5m0s for pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb" in namespace "secrets-4531" to be "success or failure"
Dec 18 14:58:39.904: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb": Phase="Pending", Reason="", readiness=false. Elapsed: 98.286049ms
Dec 18 14:58:41.914: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108276968s
Dec 18 14:58:43.926: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120068631s
Dec 18 14:58:45.935: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128977077s
Dec 18 14:58:47.941: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135764383s
Dec 18 14:58:49.952: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146752994s
STEP: Saw pod success
Dec 18 14:58:49.953: INFO: Pod "pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb" satisfied condition "success or failure"
Dec 18 14:58:49.958: INFO: Trying to get logs from node iruya-node pod pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb container secret-volume-test: 
STEP: delete the pod
Dec 18 14:58:50.019: INFO: Waiting for pod pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb to disappear
Dec 18 14:58:50.130: INFO: Pod pod-secrets-0699bd99-d96b-4c90-9804-9116d39e35eb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:58:50.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4531" for this suite.
Dec 18 14:58:56.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:58:56.319: INFO: namespace secrets-4531 deletion completed in 6.175140182s

• [SLOW TEST:16.777 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
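
The same secret can back several volumes in one pod, each mount getting its own projection of the data. A minimal sketch, hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-multi-demo           # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-multi-demo      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "diff /etc/secret-1/data-1 /etc/secret-2/data-1 && echo identical"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
    - name: secret-volume-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-multi-demo
  - name: secret-volume-2
    secret:
      secretName: secret-multi-demo # same secret, second mount
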
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:58:56.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2436
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2436
STEP: Creating statefulset with conflicting port in namespace statefulset-2436
STEP: Waiting until pod test-pod starts running in namespace statefulset-2436
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2436
Dec 18 14:59:06.526: INFO: Observed stateful pod in namespace: statefulset-2436, name: ss-0, uid: 7bdc24f4-2af6-46bd-87a8-5354158293ea, status phase: Failed. Waiting for statefulset controller to delete.
Dec 18 14:59:06.560: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2436
STEP: Removing pod with conflicting port in namespace statefulset-2436
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2436 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 18 14:59:16.989: INFO: Deleting all statefulset in ns statefulset-2436
Dec 18 14:59:16.997: INFO: Scaling statefulset ss to 0
Dec 18 14:59:27.063: INFO: Waiting for statefulset status.replicas updated to 0
Dec 18 14:59:27.067: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:59:27.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2436" for this suite.
Dec 18 14:59:33.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:59:33.594: INFO: namespace statefulset-2436 deletion completed in 6.46170356s

• [SLOW TEST:37.274 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
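
The conflict that fails ss-0 is a hostPort collision with the pre-created test-pod; the StatefulSet controller's job is to keep recreating the pod until it can land. A rough sketch of the shape of the objects involved (hypothetical names and port, not the generated manifests):

apiVersion: v1
kind: Service
metadata:
  name: test                        # headless service, as in the log
spec:
  clusterIP: None
  selector:
    app: ss-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
          hostPort: 21017           # hypothetical; colliding with another pod's hostPort produces the Failed phase seen above

Removing the conflicting pod lets the next recreation of ss-0 bind the port and reach Running, which is the transition the test waits for.
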
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:59:33.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 18 14:59:34.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5521'
Dec 18 14:59:34.351: INFO: stderr: ""
Dec 18 14:59:34.351: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 18 14:59:44.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-5521 -o json'
Dec 18 14:59:44.588: INFO: stderr: ""
Dec 18 14:59:44.589: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-18T14:59:34Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-5521\",\n        \"resourceVersion\": \"17154128\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-5521/pods/e2e-test-nginx-pod\",\n        \"uid\": \"e5e89c2d-321f-4145-94c2-d598ae5f0d53\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-2pjxb\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-2pjxb\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-2pjxb\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-18T14:59:34Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-18T14:59:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-18T14:59:42Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-18T14:59:34Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://7f4f5a213ab396b9e7edb9f210abb3c2b84a69c7733c9ac66fef391fd8a178a1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-18T14:59:41Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-18T14:59:34Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 18 14:59:44.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5521'
Dec 18 14:59:45.177: INFO: stderr: ""
Dec 18 14:59:45.177: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 18 14:59:45.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5521'
Dec 18 14:59:53.067: INFO: stderr: ""
Dec 18 14:59:53.067: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 14:59:53.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5521" for this suite.
Dec 18 14:59:59.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 14:59:59.262: INFO: namespace kubectl-5521 deletion completed in 6.182095179s

• [SLOW TEST:25.667 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 14:59:59.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 18 14:59:59.433: INFO: Waiting up to 5m0s for pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a" in namespace "containers-4040" to be "success or failure"
Dec 18 14:59:59.450: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.174149ms
Dec 18 15:00:01.458: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024981686s
Dec 18 15:00:03.474: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040707096s
Dec 18 15:00:05.490: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056554097s
Dec 18 15:00:07.500: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066665467s
Dec 18 15:00:09.512: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078776412s
STEP: Saw pod success
Dec 18 15:00:09.512: INFO: Pod "client-containers-93811968-9254-4faa-9209-eaddd8c07b1a" satisfied condition "success or failure"
Dec 18 15:00:09.516: INFO: Trying to get logs from node iruya-node pod client-containers-93811968-9254-4faa-9209-eaddd8c07b1a container test-container: 
STEP: delete the pod
Dec 18 15:00:09.626: INFO: Waiting for pod client-containers-93811968-9254-4faa-9209-eaddd8c07b1a to disappear
Dec 18 15:00:09.631: INFO: Pod client-containers-93811968-9254-4faa-9209-eaddd8c07b1a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:00:09.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4040" for this suite.
Dec 18 15:00:15.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:00:15.830: INFO: namespace containers-4040 deletion completed in 6.193451726s

• [SLOW TEST:16.568 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
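
Overriding both the image's default entrypoint and its arguments is just command: plus args: on the container. A minimal sketch, hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo      # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo"]               # replaces the image ENTRYPOINT
    args: ["override", "arguments"] # replaces the image CMD

The container prints "override arguments" and exits, the same success-or-failure pattern used throughout these container tests.
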
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:00:15.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 18 15:00:24.571: INFO: Successfully updated pod "labelsupdate0ad1b6ba-49ab-4fbd-9030-cafd6ec2b664"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:00:26.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8297" for this suite.
Dec 18 15:00:48.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:00:48.824: INFO: namespace projected-8297 deletion completed in 22.136349779s

• [SLOW TEST:32.992 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
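
Unlike environment variables, a downward API volume is live: relabeling the pod rewrites the projected file on the kubelet's next sync, with no restart. A minimal sketch, hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo           # hypothetical
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

kubectl label pod labelsupdate-demo stage=after --overwrite then shows up in the container's output once the volume is resynced, which is what the "Successfully updated pod" line above confirms.
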
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:00:48.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 18 15:00:48.911: INFO: Waiting up to 5m0s for pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f" in namespace "emptydir-4692" to be "success or failure"
Dec 18 15:00:48.917: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223082ms
Dec 18 15:00:50.923: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012266854s
Dec 18 15:00:52.937: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026038435s
Dec 18 15:00:54.958: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046583471s
Dec 18 15:00:56.966: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054920234s
Dec 18 15:00:58.978: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06689418s
STEP: Saw pod success
Dec 18 15:00:58.979: INFO: Pod "pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f" satisfied condition "success or failure"
Dec 18 15:00:58.996: INFO: Trying to get logs from node iruya-node pod pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f container test-container: 
STEP: delete the pod
Dec 18 15:00:59.073: INFO: Waiting for pod pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f to disappear
Dec 18 15:00:59.173: INFO: Pod pod-9b39f19f-98fd-48c1-88b2-3ef9f32efe0f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:00:59.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4692" for this suite.
Dec 18 15:01:05.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:01:05.420: INFO: namespace emptydir-4692 deletion completed in 6.233774302s

• [SLOW TEST:16.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
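
The (non-root,0666,tmpfs) triple decodes to: run as a non-root UID, create a file with mode 0666, on a memory-backed emptyDir. A minimal sketch with hypothetical names (the suite's other emptyDir permutations, including the root,0644,tmpfs case later in this run, vary only the user and the chmod mode):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo         # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed

The mount line confirms tmpfs and the ls output confirms -rw-rw-rw-, mirroring the assertions the test makes from the container's logs.
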
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:01:05.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6994407e-12d9-4929-b1f8-2e81fc925cce
STEP: Creating a pod to test consume secrets
Dec 18 15:01:05.578: INFO: Waiting up to 5m0s for pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5" in namespace "secrets-4669" to be "success or failure"
Dec 18 15:01:05.594: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.99278ms
Dec 18 15:01:07.612: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03410076s
Dec 18 15:01:09.668: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089803422s
Dec 18 15:01:11.687: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10923742s
Dec 18 15:01:13.716: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138553358s
Dec 18 15:01:15.736: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157819733s
STEP: Saw pod success
Dec 18 15:01:15.736: INFO: Pod "pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5" satisfied condition "success or failure"
Dec 18 15:01:15.747: INFO: Trying to get logs from node iruya-node pod pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5 container secret-volume-test: 
STEP: delete the pod
Dec 18 15:01:15.916: INFO: Waiting for pod pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5 to disappear
Dec 18 15:01:15.984: INFO: Pod pod-secrets-7007c676-8e01-42c2-af5e-2db25ed74bd5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:01:15.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4669" for this suite.
Dec 18 15:01:22.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:01:22.183: INFO: namespace secrets-4669 deletion completed in 6.178104522s

• [SLOW TEST:16.763 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
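
Same pattern as the projected configMap defaultMode test, but on a plain secret volume: defaultMode fixes the mode of every projected key file. A minimal sketch, hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-defaultmode-demo     # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode-demo # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-defaultmode-demo
      defaultMode: 0400             # each key file appears as -r--------
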
S
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:01:22.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Dec 18 15:01:31.482: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:01:31.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1550" for this suite.
Dec 18 15:01:53.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:01:53.940: INFO: namespace replicaset-1550 deletion completed in 22.320571031s

• [SLOW TEST:31.757 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
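
Adoption and release are purely label-driven: a bare pod whose labels match the selector gets an ownerReference added, and relabeling an owned pod out of the selector drops it while the controller backfills a replacement. A minimal sketch, hypothetical names:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release        # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release    # any pre-existing bare pod with this label is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

kubectl label pod <adopted-pod> name=released --overwrite is the "matched label ... changes" step above: the pod is released and the ReplicaSet spins up a new one to get back to replicas: 1.
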
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:01:53.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 15:01:54.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:02:04.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3343" for this suite.
Dec 18 15:02:48.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:02:48.348: INFO: namespace pods-3343 deletion completed in 44.184527378s

• [SLOW TEST:54.406 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:02:48.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 18 15:02:48.512: INFO: Waiting up to 5m0s for pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60" in namespace "emptydir-5124" to be "success or failure"
Dec 18 15:02:48.568: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60": Phase="Pending", Reason="", readiness=false. Elapsed: 55.564929ms
Dec 18 15:02:50.588: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075367984s
Dec 18 15:02:52.610: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097998273s
Dec 18 15:02:54.627: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114546219s
Dec 18 15:02:56.644: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131265097s
Dec 18 15:02:58.651: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139026799s
STEP: Saw pod success
Dec 18 15:02:58.652: INFO: Pod "pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60" satisfied condition "success or failure"
Dec 18 15:02:58.655: INFO: Trying to get logs from node iruya-node pod pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60 container test-container: 
STEP: delete the pod
Dec 18 15:02:58.738: INFO: Waiting for pod pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60 to disappear
Dec 18 15:02:58.760: INFO: Pod pod-3ba58573-1beb-4d4c-b5e4-39d2b7d57b60 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:02:58.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5124" for this suite.
Dec 18 15:03:04.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:03:05.051: INFO: namespace emptydir-5124 deletion completed in 6.162688079s

• [SLOW TEST:16.703 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:03:05.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 18 15:03:05.299: INFO: Number of nodes with available pods: 0
Dec 18 15:03:05.299: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:07.094: INFO: Number of nodes with available pods: 0
Dec 18 15:03:07.094: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:07.440: INFO: Number of nodes with available pods: 0
Dec 18 15:03:07.440: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:08.351: INFO: Number of nodes with available pods: 0
Dec 18 15:03:08.351: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:09.333: INFO: Number of nodes with available pods: 0
Dec 18 15:03:09.333: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:13.626: INFO: Number of nodes with available pods: 0
Dec 18 15:03:13.626: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:14.849: INFO: Number of nodes with available pods: 0
Dec 18 15:03:14.849: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:15.315: INFO: Number of nodes with available pods: 0
Dec 18 15:03:15.315: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:16.447: INFO: Number of nodes with available pods: 0
Dec 18 15:03:16.447: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:17.348: INFO: Number of nodes with available pods: 0
Dec 18 15:03:17.348: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:18.315: INFO: Number of nodes with available pods: 1
Dec 18 15:03:18.315: INFO: Node iruya-node is running more than one daemon pod
Dec 18 15:03:19.318: INFO: Number of nodes with available pods: 2
Dec 18 15:03:19.318: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 18 15:03:19.417: INFO: Number of nodes with available pods: 1
Dec 18 15:03:19.417: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:20.430: INFO: Number of nodes with available pods: 1
Dec 18 15:03:20.430: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:21.452: INFO: Number of nodes with available pods: 1
Dec 18 15:03:21.453: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:22.437: INFO: Number of nodes with available pods: 1
Dec 18 15:03:22.437: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:23.443: INFO: Number of nodes with available pods: 1
Dec 18 15:03:23.444: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:24.455: INFO: Number of nodes with available pods: 1
Dec 18 15:03:24.455: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:25.436: INFO: Number of nodes with available pods: 1
Dec 18 15:03:25.436: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:26.437: INFO: Number of nodes with available pods: 1
Dec 18 15:03:26.438: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:27.438: INFO: Number of nodes with available pods: 1
Dec 18 15:03:27.438: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:28.435: INFO: Number of nodes with available pods: 1
Dec 18 15:03:28.435: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:29.445: INFO: Number of nodes with available pods: 1
Dec 18 15:03:29.445: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:31.126: INFO: Number of nodes with available pods: 1
Dec 18 15:03:31.127: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:31.442: INFO: Number of nodes with available pods: 1
Dec 18 15:03:31.442: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:32.552: INFO: Number of nodes with available pods: 1
Dec 18 15:03:32.553: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:33.437: INFO: Number of nodes with available pods: 1
Dec 18 15:03:33.437: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:37.940: INFO: Number of nodes with available pods: 1
Dec 18 15:03:37.940: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:38.441: INFO: Number of nodes with available pods: 1
Dec 18 15:03:38.441: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:39.444: INFO: Number of nodes with available pods: 1
Dec 18 15:03:39.444: INFO: Node iruya-server-sfge57q7djm7 is not yet running exactly one daemon pod
Dec 18 15:03:40.443: INFO: Number of nodes with available pods: 2
Dec 18 15:03:40.444: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9976, will wait for the garbage collector to delete the pods
Dec 18 15:03:40.525: INFO: Deleting DaemonSet.extensions daemon-set took: 19.208669ms
Dec 18 15:03:40.826: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.938417ms
Dec 18 15:03:48.238: INFO: Number of nodes with available pods: 0
Dec 18 15:03:48.239: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 15:03:48.247: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9976/daemonsets","resourceVersion":"17154709"},"items":null}

Dec 18 15:03:48.260: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9976/pods","resourceVersion":"17154710"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:03:48.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9976" for this suite.
Dec 18 15:03:54.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:03:54.442: INFO: namespace daemonsets-9976 deletion completed in 6.167870098s

• [SLOW TEST:49.390 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
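The sequence above — create a DaemonSet, wait until every node runs one available pod, delete one pod, wait for the controller to replace it — can be reproduced by hand with kubectl. A minimal sketch, assuming a reachable cluster; the DaemonSet name, label, and pause image are illustrative, not the manifest the suite generated:

    # Create a minimal DaemonSet (one pod per schedulable node).
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.1
    EOF

    # Block until every node runs an available daemon pod.
    kubectl rollout status ds/daemon-set

    # Delete the daemon pod on one node; the controller revives it there.
    kubectl get pods -l app=daemon-set -o wide
    kubectl delete pod <daemon-pod-on-iruya-node>   # name from the listing above
    kubectl get pods -l app=daemon-set -o wide -w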
SSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:03:54.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6677.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6677.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6677.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6677.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6677.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6677.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 18 15:04:06.714: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1: the server could not find the requested resource (get pods dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1)
Dec 18 15:04:06.721: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1: the server could not find the requested resource (get pods dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1)
Dec 18 15:04:06.729: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6677.svc.cluster.local from pod dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1: the server could not find the requested resource (get pods dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1)
Dec 18 15:04:06.739: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1: the server could not find the requested resource (get pods dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1)
Dec 18 15:04:06.745: INFO: Unable to read jessie_udp@PodARecord from pod dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1: the server could not find the requested resource (get pods dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1)
Dec 18 15:04:06.751: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1: the server could not find the requested resource (get pods dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1)
Dec 18 15:04:06.751: INFO: Lookups using dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6677.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 18 15:04:11.849: INFO: DNS probes using dns-6677/dns-test-c337fc74-dd5e-48b5-a65b-3b1d1c7bb8e1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:04:11.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6677" for this suite.
Dec 18 15:04:20.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:04:20.179: INFO: namespace dns-6677 deletion completed in 8.1819172s

• [SLOW TEST:25.737 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
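What the probers above check: the kubelet writes the pod's own hostname and service FQDN into /etc/hosts, and the pod's A record (the dashed-IP form built by the awk snippet, e.g. 10-44-0-1.dns-6677.pod.cluster.local) must resolve over both UDP and TCP. A rough manual equivalent, using busybox rather than the suite's wheezy/jessie prober images (which carry getent and dig):

    kubectl run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- sh -c '
      cat /etc/hosts                  # kubelet-managed entries, incl. this pod
      grep "$(hostname)" /etc/hosts   # the hostname entry the test asserts on
      nslookup kubernetes.default     # cluster DNS reachability
    '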
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:04:20.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 18 15:04:20.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2157'
Dec 18 15:04:22.605: INFO: stderr: ""
Dec 18 15:04:22.605: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 18 15:04:23.632: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:23.632: INFO: Found 0 / 1
Dec 18 15:04:24.612: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:24.613: INFO: Found 0 / 1
Dec 18 15:04:25.621: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:25.621: INFO: Found 0 / 1
Dec 18 15:04:26.631: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:26.632: INFO: Found 0 / 1
Dec 18 15:04:27.617: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:27.617: INFO: Found 0 / 1
Dec 18 15:04:28.634: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:28.635: INFO: Found 0 / 1
Dec 18 15:04:29.612: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:29.612: INFO: Found 0 / 1
Dec 18 15:04:30.615: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:30.615: INFO: Found 1 / 1
Dec 18 15:04:30.615: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 18 15:04:30.620: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:04:30.620: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
STEP: checking for matching strings
Dec 18 15:04:30.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tmrvl redis-master --namespace=kubectl-2157'
Dec 18 15:04:30.842: INFO: stderr: ""
Dec 18 15:04:30.843: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 18 Dec 15:04:29.192 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Dec 15:04:29.192 # Server started, Redis version 3.2.12\n1:M 18 Dec 15:04:29.193 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Dec 15:04:29.193 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 18 15:04:30.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tmrvl redis-master --namespace=kubectl-2157 --tail=1'
Dec 18 15:04:31.003: INFO: stderr: ""
Dec 18 15:04:31.003: INFO: stdout: "1:M 18 Dec 15:04:29.193 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 18 15:04:31.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tmrvl redis-master --namespace=kubectl-2157 --limit-bytes=1'
Dec 18 15:04:31.155: INFO: stderr: ""
Dec 18 15:04:31.156: INFO: stdout: " "
STEP: exposing timestamps
Dec 18 15:04:31.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tmrvl redis-master --namespace=kubectl-2157 --tail=1 --timestamps'
Dec 18 15:04:31.314: INFO: stderr: ""
Dec 18 15:04:31.314: INFO: stdout: "2019-12-18T15:04:29.193295577Z 1:M 18 Dec 15:04:29.193 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 18 15:04:33.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tmrvl redis-master --namespace=kubectl-2157 --since=1s'
Dec 18 15:04:34.174: INFO: stderr: ""
Dec 18 15:04:34.174: INFO: stdout: ""
Dec 18 15:04:34.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tmrvl redis-master --namespace=kubectl-2157 --since=24h'
Dec 18 15:04:34.314: INFO: stderr: ""
Dec 18 15:04:34.314: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 18 Dec 15:04:29.192 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Dec 15:04:29.192 # Server started, Redis version 3.2.12\n1:M 18 Dec 15:04:29.193 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Dec 15:04:29.193 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 18 15:04:34.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2157'
Dec 18 15:04:34.422: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 18 15:04:34.423: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 18 15:04:34.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2157'
Dec 18 15:04:34.719: INFO: stderr: "No resources found.\n"
Dec 18 15:04:34.719: INFO: stdout: ""
Dec 18 15:04:34.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2157 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 18 15:04:34.855: INFO: stderr: ""
Dec 18 15:04:34.856: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:04:34.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2157" for this suite.
Dec 18 15:04:40.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:04:41.008: INFO: namespace kubectl-2157 deletion completed in 6.142546466s

• [SLOW TEST:20.828 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
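The filtering flags exercised above compose freely, and all of them appear verbatim in the logged invocations. A condensed sketch (the pod and container names are the ones from this run and will differ elsewhere):

    NS=kubectl-2157 POD=redis-master-tmrvl
    kubectl -n "$NS" logs "$POD" redis-master                       # full container log
    kubectl -n "$NS" logs "$POD" redis-master --tail=1              # last line only
    kubectl -n "$NS" logs "$POD" redis-master --limit-bytes=1       # first byte only
    kubectl -n "$NS" logs "$POD" redis-master --tail=1 --timestamps # RFC3339 prefix per line
    kubectl -n "$NS" logs "$POD" redis-master --since=1s            # empty if nothing logged recently
    kubectl -n "$NS" logs "$POD" redis-master --since=24h           # everything from the last day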
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:04:41.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e in namespace container-probe-6524
Dec 18 15:04:51.202: INFO: Started pod liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e in namespace container-probe-6524
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 15:04:51.208: INFO: Initial restart count of pod liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e is 0
Dec 18 15:05:09.555: INFO: Restart count of pod container-probe-6524/liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e is now 1 (18.34766413s elapsed)
Dec 18 15:05:29.669: INFO: Restart count of pod container-probe-6524/liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e is now 2 (38.461545574s elapsed)
Dec 18 15:05:49.901: INFO: Restart count of pod container-probe-6524/liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e is now 3 (58.693052978s elapsed)
Dec 18 15:06:10.026: INFO: Restart count of pod container-probe-6524/liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e is now 4 (1m18.818789079s elapsed)
Dec 18 15:07:10.448: INFO: Restart count of pod container-probe-6524/liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e is now 5 (2m19.240352715s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:07:10.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6524" for this suite.
Dec 18 15:07:16.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:07:16.811: INFO: namespace container-probe-6524 deletion completed in 6.285568732s

• [SLOW TEST:155.802 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
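The assertion here is simply that .status.containerStatuses[*].restartCount never decreases while the kubelet keeps restarting a pod whose liveness probe fails. One way to watch the counter by hand, using the pod name and namespace from this run:

    # Poll the restart counter; successive values must be non-decreasing.
    while true; do
      kubectl -n container-probe-6524 get pod \
        liveness-226f5a4f-97f6-4129-96d2-3b6262f2f47e \
        -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
      sleep 5
    done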
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:07:16.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-2e6fd3e9-1961-4629-a239-797690bba3fc
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:07:16.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2896" for this suite.
Dec 18 15:07:22.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:07:22.999: INFO: namespace secrets-2896 deletion completed in 6.111499291s

• [SLOW TEST:6.188 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
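No pod ever runs in this test: the API server's validation rejects a Secret whose data map contains an empty key, and that rejection is the whole assertion. A sketch of triggering the same failure (the Secret name is illustrative, and the exact error wording may differ by version):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-test
    data:
      "": dmFsdWU=     # base64 "value" under an empty key -> rejected by validation
    EOF
    # Expected: a validation error along the lines of
    #   The Secret "secret-emptykey-test" is invalid: data[]: Invalid value: ""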
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:07:23.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 15:07:23.115: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 18 15:07:23.126: INFO: Number of nodes with available pods: 0
Dec 18 15:07:23.126: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 18 15:07:23.296: INFO: Number of nodes with available pods: 0
Dec 18 15:07:23.297: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:24.306: INFO: Number of nodes with available pods: 0
Dec 18 15:07:24.306: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:25.309: INFO: Number of nodes with available pods: 0
Dec 18 15:07:25.309: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:26.305: INFO: Number of nodes with available pods: 0
Dec 18 15:07:26.305: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:27.314: INFO: Number of nodes with available pods: 0
Dec 18 15:07:27.314: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:28.308: INFO: Number of nodes with available pods: 0
Dec 18 15:07:28.308: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:29.310: INFO: Number of nodes with available pods: 0
Dec 18 15:07:29.310: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:30.305: INFO: Number of nodes with available pods: 0
Dec 18 15:07:30.305: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:31.308: INFO: Number of nodes with available pods: 1
Dec 18 15:07:31.309: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 18 15:07:31.367: INFO: Number of nodes with available pods: 1
Dec 18 15:07:31.367: INFO: Number of running nodes: 0, number of available pods: 1
Dec 18 15:07:32.376: INFO: Number of nodes with available pods: 0
Dec 18 15:07:32.376: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 18 15:07:32.398: INFO: Number of nodes with available pods: 0
Dec 18 15:07:32.398: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:33.414: INFO: Number of nodes with available pods: 0
Dec 18 15:07:33.414: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:34.406: INFO: Number of nodes with available pods: 0
Dec 18 15:07:34.407: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:35.774: INFO: Number of nodes with available pods: 0
Dec 18 15:07:35.775: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:36.415: INFO: Number of nodes with available pods: 0
Dec 18 15:07:36.415: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:37.409: INFO: Number of nodes with available pods: 0
Dec 18 15:07:37.409: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:38.418: INFO: Number of nodes with available pods: 0
Dec 18 15:07:38.418: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:39.415: INFO: Number of nodes with available pods: 0
Dec 18 15:07:39.415: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:40.422: INFO: Number of nodes with available pods: 0
Dec 18 15:07:40.422: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:41.419: INFO: Number of nodes with available pods: 0
Dec 18 15:07:41.419: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:42.411: INFO: Number of nodes with available pods: 0
Dec 18 15:07:42.411: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:43.410: INFO: Number of nodes with available pods: 0
Dec 18 15:07:43.411: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:44.407: INFO: Number of nodes with available pods: 0
Dec 18 15:07:44.407: INFO: Node iruya-node is not yet running exactly one daemon pod
Dec 18 15:07:45.433: INFO: Number of nodes with available pods: 1
Dec 18 15:07:45.434: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7029, will wait for the garbage collector to delete the pods
Dec 18 15:07:45.518: INFO: Deleting DaemonSet.extensions daemon-set took: 21.705004ms
Dec 18 15:07:45.919: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.039417ms
Dec 18 15:07:53.431: INFO: Number of nodes with available pods: 0
Dec 18 15:07:53.431: INFO: Number of running nodes: 0, number of available pods: 0
Dec 18 15:07:53.438: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7029/daemonsets","resourceVersion":"17155246"},"items":null}

Dec 18 15:07:53.443: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7029/pods","resourceVersion":"17155246"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:07:53.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7029" for this suite.
Dec 18 15:07:59.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:08:00.562: INFO: namespace daemonsets-7029 deletion completed in 6.995108263s

• [SLOW TEST:37.563 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
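The "complex" variant drives scheduling purely through labels: the DaemonSet carries a nodeSelector, so pods appear only on nodes whose labels match and are unscheduled when the label changes. A hand-run sketch against an existing DaemonSet named daemon-set whose nodeSelector starts as color: blue (the label key and values are illustrative; the suite generates its own):

    kubectl label node iruya-node color=blue               # daemon pod gets scheduled
    kubectl label node iruya-node color=green --overwrite  # pod is unscheduled again
    # Re-point the DaemonSet at the new label and switch its update strategy:
    kubectl patch ds daemon-set --type merge -p '
    {"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}},
     "updateStrategy":{"type":"RollingUpdate"}}}'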
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:08:00.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4129/secret-test-fb3bde2b-b275-4330-a540-b573993a19d2
STEP: Creating a pod to test consume secrets
Dec 18 15:08:00.901: INFO: Waiting up to 5m0s for pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d" in namespace "secrets-4129" to be "success or failure"
Dec 18 15:08:00.957: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d": Phase="Pending", Reason="", readiness=false. Elapsed: 55.543666ms
Dec 18 15:08:02.965: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063334979s
Dec 18 15:08:04.986: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084389797s
Dec 18 15:08:07.329: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427227297s
Dec 18 15:08:09.392: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.490473213s
Dec 18 15:08:11.416: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.514652625s
STEP: Saw pod success
Dec 18 15:08:11.417: INFO: Pod "pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d" satisfied condition "success or failure"
Dec 18 15:08:11.425: INFO: Trying to get logs from node iruya-node pod pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d container env-test: 
STEP: delete the pod
Dec 18 15:08:11.881: INFO: Waiting for pod pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d to disappear
Dec 18 15:08:11.897: INFO: Pod pod-configmaps-718f09b3-c892-4606-8c93-5bbdc3f5d33d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:08:11.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4129" for this suite.
Dec 18 15:08:17.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:08:18.107: INFO: namespace secrets-4129 deletion completed in 6.158644458s

• [SLOW TEST:17.544 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
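Consumption "via the environment" means env[].valueFrom.secretKeyRef rather than a volume mount. A minimal equivalent of the pod the test builds (all names are illustrative):

    kubectl create secret generic secret-test --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-test
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox:1.28
        command: ["sh", "-c", "echo $SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1
    EOF
    kubectl logs env-test    # prints: value-1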
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:08:18.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1218 15:08:35.680705       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 15:08:35.680: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:08:35.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4136" for this suite.
Dec 18 15:08:46.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:08:46.339: INFO: namespace gc-4136 deletion completed in 9.785058537s

• [SLOW TEST:28.231 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
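The setup gives half the pods two owners: the RC being deleted while waiting for its dependents, and one that stays. Dependents with a second live owner must survive the deletion. One way to issue a delete with an explicit propagationPolicy is the raw API (resource names here mirror this run; a v1.15 kubectl has no flag for the policy):

    kubectl proxy --port=8001 &
    curl -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/gc-4136/replicationcontrollers/simpletest-rc-to-be-deleted
    # Pods that also list simpletest-rc-to-stay as an owner keep running:
    kubectl -n gc-4136 get pods \
      -o jsonpath='{range .items[*]}{.metadata.name}: {.metadata.ownerReferences[*].name}{"\n"}{end}'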
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:08:46.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 15:08:46.796: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:08:50.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3806" for this suite.
Dec 18 15:08:56.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:08:56.200: INFO: namespace custom-resource-definition-3806 deletion completed in 6.174197418s

• [SLOW TEST:9.861 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
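On this v1.15 cluster the CRD API is still apiextensions.k8s.io/v1beta1 (v1 arrived in 1.16). A minimal create/inspect/delete round-trip of the kind this test performs, with an illustrative group and kind:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1beta1   # matches this v1.15 cluster
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      versions:
      - name: v1
        served: true
        storage: true
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
    EOF
    kubectl get crd foos.example.com
    kubectl delete crd foos.example.com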
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:08:56.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1218 15:09:26.964790       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 18 15:09:26.964: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:09:26.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1407" for this suite.
Dec 18 15:09:37.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:09:37.885: INFO: namespace gc-1407 deletion completed in 10.911978272s

• [SLOW TEST:41.685 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
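Deleting a Deployment with propagationPolicy=Orphan must leave its ReplicaSet behind, and the 30-second wait above proves the collector does not touch it. With a v1.15 kubectl the orphaning policy is spelled --cascade=false (the deployment name is illustrative):

    kubectl create deployment nginx --image=nginx     # owns one ReplicaSet
    kubectl get rs -l app=nginx
    kubectl delete deployment nginx --cascade=false   # orphan, don't cascade
    kubectl get rs -l app=nginx                       # the ReplicaSet is still there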
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:09:37.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-591ac88e-6f35-400e-b5fd-7f22aca2966b in namespace container-probe-1540
Dec 18 15:09:46.208: INFO: Started pod liveness-591ac88e-6f35-400e-b5fd-7f22aca2966b in namespace container-probe-1540
STEP: checking the pod's current state and verifying that restartCount is present
Dec 18 15:09:46.211: INFO: Initial restart count of pod liveness-591ac88e-6f35-400e-b5fd-7f22aca2966b is 0
Dec 18 15:10:13.080: INFO: Restart count of pod container-probe-1540/liveness-591ac88e-6f35-400e-b5fd-7f22aca2966b is now 1 (26.869103927s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:10:13.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1540" for this suite.
Dec 18 15:10:19.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:10:19.290: INFO: namespace container-probe-1540 deletion completed in 6.17070342s

• [SLOW TEST:41.404 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
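The pod here exposes an HTTP /healthz handler wired to an httpGet liveness probe; once the handler starts failing, the kubelet kills and restarts the container and restartCount ticks to 1, exactly as logged. A sketch using the upstream documentation's liveness image (the suite uses its own pod spec):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-http
    spec:
      containers:
      - name: liveness
        image: k8s.gcr.io/liveness    # /healthz starts returning 500 after ~10s
        args: ["/server"]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
    EOF
    kubectl get pod liveness-http \
      -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'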
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:10:19.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 15:10:19.587: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"451d0826-f0dc-4849-989a-a070645a0243", Controller:(*bool)(0xc000d07d82), BlockOwnerDeletion:(*bool)(0xc000d07d83)}}
Dec 18 15:10:19.598: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ef521d29-88e6-4016-bdf1-5cc53ae382b6", Controller:(*bool)(0xc002608b6a), BlockOwnerDeletion:(*bool)(0xc002608b6b)}}
Dec 18 15:10:19.633: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9ece81b3-897c-427c-879d-3f8ddc43810b", Controller:(*bool)(0xc002608d32), BlockOwnerDeletion:(*bool)(0xc002608d33)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:10:24.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6033" for this suite.
Dec 18 15:10:30.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:10:31.006: INFO: namespace gc-6033 deletion completed in 6.182220343s

• [SLOW TEST:11.715 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
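The three OwnerReference dumps above form a cycle — pod1 is owned by pod3, pod2 by pod1, pod3 by pod2 — and the collector must neither deadlock on it nor delete the pods. Inspecting such references, and a hypothetical sketch of wiring one by hand, where <uid-of-pod1> must be the live owner's UID:

    kubectl -n gc-6033 get pod pod1 \
      -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
    # A merge patch replaces the whole ownerReferences list:
    kubectl -n gc-6033 patch pod pod2 --type merge -p '
    {"metadata":{"ownerReferences":[{"apiVersion":"v1","kind":"Pod",
     "name":"pod1","uid":"<uid-of-pod1>","controller":true,"blockOwnerDeletion":true}]}}'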
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:10:31.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d1380e44-2ecd-4908-a78e-cd9248159406
STEP: Creating a pod to test consume configMaps
Dec 18 15:10:31.109: INFO: Waiting up to 5m0s for pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f" in namespace "configmap-1595" to be "success or failure"
Dec 18 15:10:31.115: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300579ms
Dec 18 15:10:33.127: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018125685s
Dec 18 15:10:35.140: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030742738s
Dec 18 15:10:37.151: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041803101s
Dec 18 15:10:39.291: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182154645s
Dec 18 15:10:41.303: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.194299751s
STEP: Saw pod success
Dec 18 15:10:41.304: INFO: Pod "pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f" satisfied condition "success or failure"
Dec 18 15:10:41.311: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f container configmap-volume-test: 
STEP: delete the pod
Dec 18 15:10:41.474: INFO: Waiting for pod pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f to disappear
Dec 18 15:10:41.486: INFO: Pod pod-configmaps-b73ff0e9-3ad0-4119-8478-fb445600825f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:10:41.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1595" for this suite.
Dec 18 15:10:47.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:10:47.763: INFO: namespace configmap-1595 deletion completed in 6.257026143s

• [SLOW TEST:16.757 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
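"With mappings" means the volume's items list redirects a ConfigMap key to a custom relative path instead of a file named after the key. A minimal equivalent of what the test mounts (names and paths illustrative):

    kubectl create configmap configmap-test --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox:1.28
        command: ["cat", "/etc/configmap-volume/path/to/data-1"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test
          items:                     # the "mapping": key -> custom file path
          - key: data-1
            path: path/to/data-1
    EOF
    kubectl logs pod-configmaps    # prints: value-1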
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:10:47.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:10:53.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5220" for this suite.
Dec 18 15:10:59.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:10:59.592: INFO: namespace watch-5220 deletion completed in 6.316829217s

• [SLOW TEST:11.829 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
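The ordering guarantee under test: watches opened at the same resourceVersion must deliver the same events in the same sequence. A rough manual check over the raw watch API (bash and jq assumed; the namespace is the one from this run):

    kubectl proxy --port=8001 &
    RV=$(kubectl -n watch-5220 get configmaps -o jsonpath='{.metadata.resourceVersion}')
    URL="http://127.0.0.1:8001/api/v1/namespaces/watch-5220/configmaps?watch=1&resourceVersion=${RV}"
    curl -sN "$URL" > a.jsonl &    # first watch
    curl -sN "$URL" > b.jsonl &    # second watch from the same resourceVersion
    # ...create/update/delete some configmaps, then compare the event order:
    diff <(jq -r .object.metadata.resourceVersion a.jsonl) \
         <(jq -r .object.metadata.resourceVersion b.jsonl)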
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 18 15:10:59.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 18 15:10:59.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9751'
Dec 18 15:11:00.384: INFO: stderr: ""
Dec 18 15:11:00.384: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 18 15:11:00.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9751'
Dec 18 15:11:01.208: INFO: stderr: ""
Dec 18 15:11:01.208: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 18 15:11:02.219: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:02.219: INFO: Found 0 / 1
Dec 18 15:11:03.226: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:03.227: INFO: Found 0 / 1
Dec 18 15:11:04.221: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:04.221: INFO: Found 0 / 1
Dec 18 15:11:05.708: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:05.709: INFO: Found 0 / 1
Dec 18 15:11:06.227: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:06.227: INFO: Found 0 / 1
Dec 18 15:11:07.215: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:07.215: INFO: Found 0 / 1
Dec 18 15:11:08.225: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:08.226: INFO: Found 0 / 1
Dec 18 15:11:09.217: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:09.217: INFO: Found 1 / 1
Dec 18 15:11:09.217: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 18 15:11:09.222: INFO: Selector matched 1 pod for map[app:redis]
Dec 18 15:11:09.222: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Dec 18 15:11:09.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-dc6qj --namespace=kubectl-9751'
Dec 18 15:11:09.453: INFO: stderr: ""
Dec 18 15:11:09.454: INFO: stdout: "Name:           redis-master-dc6qj\nNamespace:      kubectl-9751\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Wed, 18 Dec 2019 15:11:00 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://d6d52647d11954c1f77210517f7aa08ceb35d72ca64240734aadcd5a360492d6\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 18 Dec 2019 15:11:08 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7fghz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-7fghz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-7fghz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-9751/redis-master-dc6qj to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Dec 18 15:11:09.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9751'
Dec 18 15:11:09.618: INFO: stderr: ""
Dec 18 15:11:09.618: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-9751\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-dc6qj\n"
Dec 18 15:11:09.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9751'
Dec 18 15:11:09.738: INFO: stderr: ""
Dec 18 15:11:09.739: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-9751\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.109.11.187\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
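
The 'Endpoints: 10.44.0.1:6379' line ties the Service back to the pod described earlier: the endpoints controller copies the IPs of ready pods matching the selector app=redis,role=master into an Endpoints object of the same name. A sketch of confirming that wiring through the API (same v1.15-era signature assumptions as above):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ep, err := client.CoreV1().Endpoints("kubectl-9751").Get("redis-master", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Expected from the log: 10.44.0.1:6379, the IP of redis-master-dc6qj.
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			for _, port := range subset.Ports {
				fmt.Printf("%s:%d\n", addr.IP, port.Port)
			}
		}
	}
}
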
Dec 18 15:11:09.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 18 15:11:09.908: INFO: stderr: ""
Dec 18 15:11:09.908: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 18 Dec 2019 15:11:05 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 18 Dec 2019 15:11:05 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 18 Dec 2019 15:11:05 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 18 Dec 2019 15:11:05 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         136d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         67d\n  kubectl-9751               redis-master-dc6qj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
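
The 'Allocated resources' table in the node describe is not stored on the Node object; kubectl computes it by summing the resource requests of non-terminated pods bound to the node and comparing them against Status.Allocatable. With the three pods listed, CPU requests total 20m against 4 allocatable CPUs, i.e. well under 1%. A sketch of the same computation (the field-selector string and the context-free v1.15-era signatures are assumptions):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Non-terminated pods on the node, as in the describe output above.
	pods, err := client.CoreV1().Pods("").List(metav1.ListOptions{
		FieldSelector: "spec.nodeName=iruya-node,status.phase!=Succeeded,status.phase!=Failed",
	})
	if err != nil {
		panic(err)
	}
	cpu := resource.MustParse("0")
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			cpu.Add(*c.Resources.Requests.Cpu())
		}
	}
	node, err := client.CoreV1().Nodes().Get("iruya-node", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Expected from the log: cpu 20m against 4 allocatable CPUs.
	fmt.Printf("cpu %s / allocatable %s\n", cpu.String(), node.Status.Allocatable.Cpu().String())
}
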
Dec 18 15:11:09.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9751'
Dec 18 15:11:10.059: INFO: stderr: ""
Dec 18 15:11:10.059: INFO: stdout: "Name:         kubectl-9751\nLabels:       e2e-framework=kubectl\n              e2e-run=cf51e942-6928-4af1-b147-86224600ce26\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 18 15:11:10.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9751" for this suite.
Dec 18 15:11:32.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 15:11:32.188: INFO: namespace kubectl-9751 deletion completed in 22.125745667s

• [SLOW TEST:32.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Dec 18 15:11:32.189: INFO: Running AfterSuite actions on all nodes
Dec 18 15:11:32.189: INFO: Running AfterSuite actions on node 1
Dec 18 15:11:32.189: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8122.062 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS